00:00:00.001 Started by upstream project "autotest-per-patch" build number 132523 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.066 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.067 The recommended git tool is: git 00:00:00.067 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.093 Fetching changes from the remote Git repository 00:00:00.096 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.126 Using shallow fetch with depth 1 00:00:00.126 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.126 > git --version # timeout=10 00:00:00.150 > git --version # 'git version 2.39.2' 00:00:00.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.990 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.002 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.016 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.016 > git config core.sparsecheckout # timeout=10 00:00:04.026 > git read-tree -mu HEAD # timeout=10 00:00:04.042 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.073 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.073 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.186 [Pipeline] Start of Pipeline 00:00:04.203 [Pipeline] library 00:00:04.206 Loading library shm_lib@master 00:00:05.695 Library shm_lib@master is cached. Copying from home. 00:00:05.748 [Pipeline] node 00:00:05.834 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:05.836 [Pipeline] { 00:00:05.846 [Pipeline] catchError 00:00:05.851 [Pipeline] { 00:00:05.869 [Pipeline] wrap 00:00:05.879 [Pipeline] { 00:00:05.895 [Pipeline] stage 00:00:05.898 [Pipeline] { (Prologue) 00:00:05.917 [Pipeline] echo 00:00:05.920 Node: VM-host-SM38 00:00:05.927 [Pipeline] cleanWs 00:00:05.941 [WS-CLEANUP] Deleting project workspace... 00:00:05.941 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.968 [WS-CLEANUP] done 00:00:06.202 [Pipeline] setCustomBuildProperty 00:00:06.264 [Pipeline] httpRequest 00:00:09.287 [Pipeline] echo 00:00:09.289 Sorcerer 10.211.164.20 is dead 00:00:09.295 [Pipeline] httpRequest 00:00:10.357 [Pipeline] echo 00:00:10.359 Sorcerer 10.211.164.101 is alive 00:00:10.370 [Pipeline] retry 00:00:10.372 [Pipeline] { 00:00:10.386 [Pipeline] httpRequest 00:00:10.392 HttpMethod: GET 00:00:10.393 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.393 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.394 Response Code: HTTP/1.1 200 OK 00:00:10.395 Success: Status code 200 is in the accepted range: 200,404 00:00:10.396 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.622 [Pipeline] } 00:00:10.640 [Pipeline] // retry 00:00:10.650 [Pipeline] sh 00:00:10.934 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.948 [Pipeline] httpRequest 00:00:11.346 [Pipeline] echo 00:00:11.348 Sorcerer 10.211.164.101 is alive 00:00:11.356 [Pipeline] retry 00:00:11.358 [Pipeline] { 00:00:11.372 [Pipeline] httpRequest 00:00:11.377 HttpMethod: GET 00:00:11.378 URL: http://10.211.164.101/packages/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:00:11.379 Sending request to url: http://10.211.164.101/packages/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:00:11.380 Response Code: HTTP/1.1 404 Not Found 00:00:11.380 Success: Status code 404 is in the accepted range: 200,404 00:00:11.381 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:00:11.384 [Pipeline] } 00:00:11.401 [Pipeline] // retry 00:00:11.409 [Pipeline] sh 00:00:11.698 + rm -f spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:00:11.714 [Pipeline] retry 00:00:11.715 [Pipeline] { 00:00:11.735 [Pipeline] checkout 00:00:11.743 The recommended git tool is: NONE 00:00:11.768 using credential 00000000-0000-0000-0000-000000000002 00:00:11.770 Wiping out workspace first. 
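The 404 above is deliberate: the Sorcerer package cache is tried first, and a miss (404 is in the accepted status range) simply routes the job to a full Gerrit checkout, after which the freshly packed tarball is uploaded back to the cache (the PUT later in this log). A minimal curl-based sketch of that fetch-or-clone logic follows; the pipeline itself uses Jenkins httpRequest/retry steps, so the shape below is an assumption, with host, SHA, and paths taken from the log:

  #!/usr/bin/env bash
  set -euo pipefail
  cache=http://10.211.164.101/packages
  sha=a9e1e4309cdc83028f205f483fd163a9ff0da22f
  tarball="spdk_${sha}.tar.gz"
  if curl -fsS -o "$tarball" "$cache/$tarball"; then
      # Cache hit: unpack, as the jbp tarball step above does.
      tar --no-same-owner -xf "$tarball"
  else
      # Cache miss (404): drop the empty file and clone from Gerrit instead.
      rm -f "$tarball"
      git clone https://review.spdk.io/gerrit/a/spdk/spdk spdk
      git -C spdk checkout -f "$sha"
  fi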
00:00:11.779 Cloning the remote Git repository 00:00:11.783 Honoring refspec on initial clone 00:00:11.784 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:11.784 > git init /var/jenkins/workspace/nvme-vg-autotest_2/spdk # timeout=10 00:00:11.797 Using reference repository: /var/ci_repos/spdk_multi 00:00:11.797 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:11.797 > git --version # timeout=10 00:00:11.800 > git --version # 'git version 2.25.1' 00:00:11.801 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:11.804 Setting http proxy: proxy-dmz.intel.com:911 00:00:11.804 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/23/25423/2 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:25.923 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:25.928 > git config --add remote.origin.fetch refs/changes/23/25423/2 # timeout=10 00:00:25.947 Avoid second fetch 00:00:25.964 Checking out Revision a9e1e4309cdc83028f205f483fd163a9ff0da22f (FETCH_HEAD) 00:00:26.287 Commit message: "nvmf: discovery log page updation change" 00:00:25.931 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:25.945 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:25.953 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:25.962 > git config core.sparsecheckout # timeout=10 00:00:25.965 > git checkout -f a9e1e4309cdc83028f205f483fd163a9ff0da22f # timeout=10 00:00:26.286 > git rev-list --no-walk 09958c1257bd95fbe407610d7d1e17190c316469 # timeout=10 00:00:26.315 > git remote # timeout=10 00:00:26.318 > git submodule init # timeout=10 00:00:26.385 > git submodule sync # timeout=10 00:00:26.452 > git config --get remote.origin.url # timeout=10 00:00:26.460 > git submodule init # timeout=10 00:00:26.532 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:26.536 > git config --get submodule.dpdk.url # timeout=10 00:00:26.543 > git remote # timeout=10 00:00:26.547 > git config --get remote.origin.url # timeout=10 00:00:26.550 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:26.553 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:26.556 > git remote # timeout=10 00:00:26.559 > git config --get remote.origin.url # timeout=10 00:00:26.562 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:26.566 > git config --get submodule.isa-l.url # timeout=10 00:00:26.569 > git remote # timeout=10 00:00:26.572 > git config --get remote.origin.url # timeout=10 00:00:26.575 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:26.578 > git config --get submodule.ocf.url # timeout=10 00:00:26.582 > git remote # timeout=10 00:00:26.585 > git config --get remote.origin.url # timeout=10 00:00:26.589 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:26.593 > git config --get submodule.libvfio-user.url # timeout=10 00:00:26.605 > git remote # timeout=10 00:00:26.610 > git config --get remote.origin.url # timeout=10 00:00:26.614 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:26.617 > git config --get submodule.xnvme.url # timeout=10 00:00:26.628 > git remote # timeout=10 00:00:26.631 > git config --get remote.origin.url # timeout=10 00:00:26.636 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:26.639 > git config --get 
submodule.isa-l-crypto.url # timeout=10 00:00:26.643 > git remote # timeout=10 00:00:26.646 > git config --get remote.origin.url # timeout=10 00:00:26.649 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:26.654 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.654 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.655 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.655 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.656 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.656 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.657 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:26.658 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.658 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:26.658 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.658 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:26.659 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.659 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:26.659 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.659 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:26.660 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.660 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:26.660 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.660 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:26.660 Setting http proxy: proxy-dmz.intel.com:911 00:00:26.660 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:34.945 [Pipeline] dir 00:00:34.945 Running in /var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:00:34.947 [Pipeline] { 00:00:34.961 [Pipeline] sh 00:00:35.251 ++ nproc 00:00:35.251 + threads=144 00:00:35.251 + git repack -a -d --threads=144 00:00:39.466 + git submodule foreach git repack -a -d --threads=144 00:00:39.466 Entering 'dpdk' 00:00:42.770 Entering 'intel-ipsec-mb' 00:00:42.770 Entering 'isa-l' 00:00:43.031 Entering 'isa-l-crypto' 00:00:43.031 Entering 'libvfio-user' 00:00:43.293 Entering 'ocf' 00:00:43.555 Entering 'xnvme' 00:00:44.129 + find .git -type f -name alternates -print -delete 00:00:44.129 .git/objects/info/alternates 00:00:44.129 .git/modules/isa-l/objects/info/alternates 00:00:44.129 .git/modules/dpdk/objects/info/alternates 00:00:44.129 .git/modules/xnvme/objects/info/alternates 00:00:44.129 .git/modules/libvfio-user/objects/info/alternates 00:00:44.129 .git/modules/intel-ipsec-mb/objects/info/alternates 00:00:44.129 .git/modules/ocf/objects/info/alternates 00:00:44.129 .git/modules/isa-l-crypto/objects/info/alternates 00:00:44.141 [Pipeline] } 00:00:44.158 [Pipeline] // dir 00:00:44.163 [Pipeline] } 00:00:44.181 [Pipeline] // retry 00:00:44.189 [Pipeline] sh 00:00:44.478 + hash pigz 00:00:44.478 + tar -czf spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz spdk 00:00:56.732 [Pipeline] retry 00:00:56.735 [Pipeline] { 00:00:56.750 [Pipeline] httpRequest 00:00:56.759 HttpMethod: PUT 00:00:56.759 URL: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:00:56.760 Sending request to 
url: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:01:00.909 Response Code: HTTP/1.1 200 OK 00:01:00.918 Success: Status code 200 is in the accepted range: 200 00:01:00.921 [Pipeline] } 00:01:00.935 [Pipeline] // retry 00:01:00.940 [Pipeline] echo 00:01:00.941 00:01:00.941 Locking 00:01:00.941 Waited 1s for lock 00:01:00.941 File already exists: /storage/packages/spdk_a9e1e4309cdc83028f205f483fd163a9ff0da22f.tar.gz 00:01:00.941 00:01:00.944 [Pipeline] sh 00:01:01.227 + git -C spdk log --oneline -n5 00:01:01.227 a9e1e4309 nvmf: discovery log page updation change 00:01:01.227 2a91567e4 CHANGELOG.md: corrected typo 00:01:01.227 6c35d974e lib/nvme: destruct controllers that failed init asynchronously 00:01:01.227 414f91a0c lib/nvmf: Fix double free of connect request 00:01:01.227 d8f6e798d nvme: Fix discovery loop when target has no entry 00:01:01.249 [Pipeline] writeFile 00:01:01.264 [Pipeline] sh 00:01:01.550 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:01.563 [Pipeline] sh 00:01:01.849 + cat autorun-spdk.conf 00:01:01.849 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.849 SPDK_TEST_NVME=1 00:01:01.849 SPDK_TEST_FTL=1 00:01:01.849 SPDK_TEST_ISAL=1 00:01:01.849 SPDK_RUN_ASAN=1 00:01:01.849 SPDK_RUN_UBSAN=1 00:01:01.849 SPDK_TEST_XNVME=1 00:01:01.849 SPDK_TEST_NVME_FDP=1 00:01:01.849 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:01.857 RUN_NIGHTLY=0 00:01:01.859 [Pipeline] } 00:01:01.873 [Pipeline] // stage 00:01:01.888 [Pipeline] stage 00:01:01.891 [Pipeline] { (Run VM) 00:01:01.903 [Pipeline] sh 00:01:02.189 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:02.189 + echo 'Start stage prepare_nvme.sh' 00:01:02.189 Start stage prepare_nvme.sh 00:01:02.189 + [[ -n 10 ]] 00:01:02.189 + disk_prefix=ex10 00:01:02.189 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:02.189 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:02.189 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:02.189 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.189 ++ SPDK_TEST_NVME=1 00:01:02.189 ++ SPDK_TEST_FTL=1 00:01:02.189 ++ SPDK_TEST_ISAL=1 00:01:02.189 ++ SPDK_RUN_ASAN=1 00:01:02.189 ++ SPDK_RUN_UBSAN=1 00:01:02.189 ++ SPDK_TEST_XNVME=1 00:01:02.189 ++ SPDK_TEST_NVME_FDP=1 00:01:02.189 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.189 ++ RUN_NIGHTLY=0 00:01:02.189 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:02.189 + nvme_files=() 00:01:02.189 + declare -A nvme_files 00:01:02.189 + backend_dir=/var/lib/libvirt/images/backends 00:01:02.189 + nvme_files['nvme.img']=5G 00:01:02.189 + nvme_files['nvme-cmb.img']=5G 00:01:02.189 + nvme_files['nvme-multi0.img']=4G 00:01:02.189 + nvme_files['nvme-multi1.img']=4G 00:01:02.189 + nvme_files['nvme-multi2.img']=4G 00:01:02.189 + nvme_files['nvme-openstack.img']=8G 00:01:02.189 + nvme_files['nvme-zns.img']=5G 00:01:02.189 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:02.189 + (( SPDK_TEST_FTL == 1 )) 00:01:02.189 + nvme_files["nvme-ftl.img"]=6G 00:01:02.189 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:02.189 + nvme_files["nvme-fdp.img"]=1G 00:01:02.189 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:02.189 + for nvme in "${!nvme_files[@]}" 00:01:02.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:02.189 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.189 + for nvme in "${!nvme_files[@]}" 00:01:02.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:02.449 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:02.449 + for nvme in "${!nvme_files[@]}" 00:01:02.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:02.449 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.449 + for nvme in "${!nvme_files[@]}" 00:01:02.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:02.449 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:02.449 + for nvme in "${!nvme_files[@]}" 00:01:02.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:03.394 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.394 + for nvme in "${!nvme_files[@]}" 00:01:03.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:03.394 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.394 + for nvme in "${!nvme_files[@]}" 00:01:03.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:03.394 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.394 + for nvme in "${!nvme_files[@]}" 00:01:03.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:03.394 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:03.394 + for nvme in "${!nvme_files[@]}" 00:01:03.394 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:03.967 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.967 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:03.967 + echo 'End stage prepare_nvme.sh' 00:01:03.967 End stage prepare_nvme.sh 00:01:03.981 [Pipeline] sh 00:01:04.268 + DISTRO=fedora39 00:01:04.268 + CPUS=10 00:01:04.268 + RAM=12288 00:01:04.268 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:04.268 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:04.268 
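Each "Formatting ..." line above is qemu-img output emitted by create_nvme_img.sh. The script's internals are not shown in this log, so the following single-image equivalent is an assumption, with size, path, and the raw/falloc options taken from the log; the real script may also fix ownership or SELinux labels:

  sudo qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex10-nvme-multi2.img 4G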
00:01:04.268 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:01:04.268 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:01:04.268 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:01:04.268 HELP=0
00:01:04.268 DRY_RUN=0
00:01:04.268 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:01:04.268 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:04.268 NVME_AUTO_CREATE=0
00:01:04.268 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:01:04.268 NVME_CMB=,,,,
00:01:04.268 NVME_PMR=,,,,
00:01:04.268 NVME_ZNS=,,,,
00:01:04.268 NVME_MS=true,,,,
00:01:04.268 NVME_FDP=,,,on,
00:01:04.268 SPDK_VAGRANT_DISTRO=fedora39
00:01:04.268 SPDK_VAGRANT_VMCPU=10
00:01:04.268 SPDK_VAGRANT_VMRAM=12288
00:01:04.268 SPDK_VAGRANT_PROVIDER=libvirt
00:01:04.268 SPDK_VAGRANT_HTTP_PROXY=
00:01:04.268 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:04.268 SPDK_OPENSTACK_NETWORK=0
00:01:04.268 VAGRANT_PACKAGE_BOX=0
00:01:04.268 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:04.268 FORCE_DISTRO=true
00:01:04.268 VAGRANT_BOX_VERSION=
00:01:04.268 EXTRA_VAGRANTFILES=
00:01:04.268 NIC_MODEL=e1000
00:01:04.268
00:01:04.268 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:01:04.268 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:01:06.818 Bringing machine 'default' up with 'libvirt' provider...
00:01:07.393 ==> default: Creating image (snapshot of base box volume).
00:01:07.393 ==> default: Creating domain with the following settings...
00:01:07.393 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732626835_1248bb275a9c8ce8cd51 00:01:07.393 ==> default: -- Domain type: kvm 00:01:07.393 ==> default: -- Cpus: 10 00:01:07.393 ==> default: -- Feature: acpi 00:01:07.393 ==> default: -- Feature: apic 00:01:07.393 ==> default: -- Feature: pae 00:01:07.393 ==> default: -- Memory: 12288M 00:01:07.393 ==> default: -- Memory Backing: hugepages: 00:01:07.393 ==> default: -- Management MAC: 00:01:07.393 ==> default: -- Loader: 00:01:07.393 ==> default: -- Nvram: 00:01:07.393 ==> default: -- Base box: spdk/fedora39 00:01:07.393 ==> default: -- Storage pool: default 00:01:07.393 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732626835_1248bb275a9c8ce8cd51.img (20G) 00:01:07.393 ==> default: -- Volume Cache: default 00:01:07.393 ==> default: -- Kernel: 00:01:07.393 ==> default: -- Initrd: 00:01:07.393 ==> default: -- Graphics Type: vnc 00:01:07.393 ==> default: -- Graphics Port: -1 00:01:07.393 ==> default: -- Graphics IP: 127.0.0.1 00:01:07.393 ==> default: -- Graphics Password: Not defined 00:01:07.393 ==> default: -- Video Type: cirrus 00:01:07.393 ==> default: -- Video VRAM: 9216 00:01:07.393 ==> default: -- Sound Type: 00:01:07.393 ==> default: -- Keymap: en-us 00:01:07.393 ==> default: -- TPM Path: 00:01:07.393 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:07.393 ==> default: -- Command line args: 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:07.393 ==> default: -> value=-drive, 00:01:07.393 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:07.393 ==> default: -> value=-drive, 00:01:07.393 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:07.393 ==> default: -> value=-drive, 00:01:07.393 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.393 ==> default: -> value=-drive, 00:01:07.393 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.393 ==> default: -> value=-drive, 00:01:07.393 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:07.393 ==> default: -> value=-device, 00:01:07.393 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.393 ==> default: -> value=-device, 00:01:07.394 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:07.394 ==> default: -> value=-device, 00:01:07.394 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:07.394 ==> default: -> value=-drive, 00:01:07.394 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:07.394 ==> default: -> value=-device, 00:01:07.394 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.653 ==> default: Creating shared folders metadata... 00:01:07.653 ==> default: Starting domain. 00:01:09.030 ==> default: Waiting for domain to get an IP address... 00:01:27.157 ==> default: Waiting for SSH to become available... 00:01:27.157 ==> default: Configuring and enabling network interfaces... 00:01:30.468 default: SSH address: 192.168.121.151:22 00:01:30.468 default: SSH username: vagrant 00:01:30.468 default: SSH auth method: private key 00:01:32.380 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.527 ==> default: Mounting SSHFS shared folder... 00:01:42.448 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:42.448 ==> default: Checking Mount.. 00:01:43.839 ==> default: Folder Successfully Mounted! 00:01:43.839 00:01:43.839 SUCCESS! 00:01:43.839 00:01:43.839 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:43.839 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:43.839 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:43.839 00:01:43.850 [Pipeline] } 00:01:43.865 [Pipeline] // stage 00:01:43.875 [Pipeline] dir 00:01:43.876 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:01:43.877 [Pipeline] { 00:01:43.890 [Pipeline] catchError 00:01:43.892 [Pipeline] { 00:01:43.904 [Pipeline] sh 00:01:44.190 + vagrant ssh-config --host vagrant 00:01:44.190 + sed -ne '/^Host/,$p' 00:01:44.190 + tee ssh_conf 00:01:47.497 Host vagrant 00:01:47.497 HostName 192.168.121.151 00:01:47.497 User vagrant 00:01:47.497 Port 22 00:01:47.497 UserKnownHostsFile /dev/null 00:01:47.497 StrictHostKeyChecking no 00:01:47.497 PasswordAuthentication no 00:01:47.497 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:47.497 IdentitiesOnly yes 00:01:47.497 LogLevel FATAL 00:01:47.498 ForwardAgent yes 00:01:47.498 ForwardX11 yes 00:01:47.498 00:01:47.512 [Pipeline] withEnv 00:01:47.514 [Pipeline] { 00:01:47.527 [Pipeline] sh 00:01:47.809 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:47.810 source /etc/os-release 00:01:47.810 [[ -e /image.version ]] && img=$(< /image.version) 00:01:47.810 # Minimal, systemd-like check. 
00:01:47.810 if [[ -e /.dockerenv ]]; then 00:01:47.810 # Clear garbage from the node'\''s name: 00:01:47.810 # agt-er_autotest_547-896 -> autotest_547-896 00:01:47.810 # $HOSTNAME is the actual container id 00:01:47.810 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:47.810 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:47.810 # We can assume this is a mount from a host where container is running, 00:01:47.810 # so fetch its hostname to easily identify the target swarm worker. 00:01:47.810 container="$(< /etc/hostname) ($agent)" 00:01:47.810 else 00:01:47.810 # Fallback 00:01:47.810 container=$agent 00:01:47.810 fi 00:01:47.810 fi 00:01:47.810 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:47.810 ' 00:01:48.084 [Pipeline] } 00:01:48.100 [Pipeline] // withEnv 00:01:48.107 [Pipeline] setCustomBuildProperty 00:01:48.123 [Pipeline] stage 00:01:48.125 [Pipeline] { (Tests) 00:01:48.140 [Pipeline] sh 00:01:48.422 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:48.695 [Pipeline] sh 00:01:48.976 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:49.252 [Pipeline] timeout 00:01:49.252 Timeout set to expire in 50 min 00:01:49.254 [Pipeline] { 00:01:49.268 [Pipeline] sh 00:01:49.554 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:50.130 HEAD is now at a9e1e4309 nvmf: discovery log page updation change 00:01:50.145 [Pipeline] sh 00:01:50.432 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:50.726 [Pipeline] sh 00:01:51.019 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.299 [Pipeline] sh 00:01:51.586 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:51.847 ++ readlink -f spdk_repo 00:01:51.847 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.847 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.847 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.847 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.847 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.848 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:51.848 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:51.848 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:51.848 + cd /home/vagrant/spdk_repo 00:01:51.848 + source /etc/os-release 00:01:51.848 ++ NAME='Fedora Linux' 00:01:51.848 ++ VERSION='39 (Cloud Edition)' 00:01:51.848 ++ ID=fedora 00:01:51.848 ++ VERSION_ID=39 00:01:51.848 ++ VERSION_CODENAME= 00:01:51.848 ++ PLATFORM_ID=platform:f39 00:01:51.848 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:51.848 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:51.848 ++ LOGO=fedora-logo-icon 00:01:51.848 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:51.848 ++ HOME_URL=https://fedoraproject.org/ 00:01:51.848 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:51.848 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:51.848 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:51.848 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:51.848 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:51.848 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:51.848 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:51.848 ++ SUPPORT_END=2024-11-12 00:01:51.848 ++ VARIANT='Cloud Edition' 00:01:51.848 ++ VARIANT_ID=cloud 00:01:51.848 + uname -a 00:01:51.848 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:51.848 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:52.372 Hugepages 00:01:52.372 node hugesize free / total 00:01:52.372 node0 1048576kB 0 / 0 00:01:52.372 node0 2048kB 0 / 0 00:01:52.372 00:01:52.372 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.635 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.635 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.635 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:52.635 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:52.635 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:52.635 + rm -f /tmp/spdk-ld-path 00:01:52.635 + source autorun-spdk.conf 00:01:52.635 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.635 ++ SPDK_TEST_NVME=1 00:01:52.635 ++ SPDK_TEST_FTL=1 00:01:52.635 ++ SPDK_TEST_ISAL=1 00:01:52.635 ++ SPDK_RUN_ASAN=1 00:01:52.635 ++ SPDK_RUN_UBSAN=1 00:01:52.635 ++ SPDK_TEST_XNVME=1 00:01:52.635 ++ SPDK_TEST_NVME_FDP=1 00:01:52.635 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.635 ++ RUN_NIGHTLY=0 00:01:52.635 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.635 + [[ -n '' ]] 00:01:52.635 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:52.635 + for M in /var/spdk/build-*-manifest.txt 00:01:52.635 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:52.635 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.635 + for M in /var/spdk/build-*-manifest.txt 00:01:52.635 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:52.635 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.635 + for M in /var/spdk/build-*-manifest.txt 00:01:52.635 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:52.635 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.635 ++ uname 00:01:52.635 + [[ Linux == \L\i\n\u\x ]] 00:01:52.635 + sudo dmesg -T 00:01:52.635 + sudo dmesg --clear 00:01:52.635 + dmesg_pid=5033 00:01:52.635 
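The setup.sh status table above shows the four emulated controllers (QEMU vendor/device 1b36 0010) bound to the kernel nvme driver, with nvme2 carrying the three multi* namespaces. Not part of the logged run, but a quick sysfs cross-check of that topology could look like the sketch below, pairing each nvmeX with the -device nvme,serial=1234X arguments from the QEMU command line earlier:

  # Print serial and namespace count per controller from sysfs.
  for c in /sys/class/nvme/nvme*; do
      printf '%s serial=%s namespaces=%s\n' \
          "$(basename "$c")" \
          "$(cat "$c/serial")" \
          "$(ls -d "$c"/nvme*n* 2>/dev/null | wc -l)"
  done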
+ [[ Fedora Linux == FreeBSD ]] 00:01:52.635 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.635 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.635 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:52.635 + [[ -x /usr/src/fio-static/fio ]] 00:01:52.635 + sudo dmesg -Tw 00:01:52.635 + export FIO_BIN=/usr/src/fio-static/fio 00:01:52.635 + FIO_BIN=/usr/src/fio-static/fio 00:01:52.635 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:52.635 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:52.635 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:52.635 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.635 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:52.635 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:52.635 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.635 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:52.635 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.898 13:14:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:52.898 13:14:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.898 13:14:41 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:52.898 13:14:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:52.898 13:14:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.898 13:14:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:52.898 13:14:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:52.898 13:14:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:52.898 13:14:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:52.898 13:14:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:52.898 13:14:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:52.898 13:14:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.898 13:14:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.898 13:14:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.898 13:14:41 -- paths/export.sh@5 -- $ export PATH 00:01:52.898 13:14:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:52.898 13:14:41 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:52.898 13:14:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:52.898 13:14:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732626881.XXXXXX 00:01:52.898 13:14:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732626881.lAFEwZ 00:01:52.898 13:14:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:52.898 13:14:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:52.898 13:14:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:52.898 13:14:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:52.898 13:14:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:52.898 13:14:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:52.898 13:14:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:52.898 13:14:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.898 13:14:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:52.898 13:14:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:52.898 13:14:41 -- pm/common@17 -- $ local monitor 00:01:52.898 13:14:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.898 13:14:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.898 13:14:41 -- pm/common@25 -- $ sleep 1 00:01:52.898 13:14:41 -- pm/common@21 -- $ date +%s 00:01:52.898 13:14:41 -- pm/common@21 -- $ date +%s 00:01:52.898 13:14:41 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732626881 00:01:52.898 13:14:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732626881 00:01:52.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732626881_collect-cpu-load.pm.log 00:01:52.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732626881_collect-vmstat.pm.log 00:01:53.844 13:14:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:53.844 13:14:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:53.844 13:14:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:53.844 13:14:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:53.844 13:14:42 -- spdk/autobuild.sh@16 -- $ date -u 00:01:53.844 Tue Nov 26 01:14:42 PM UTC 2024 00:01:53.844 13:14:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:53.844 v25.01-pre-241-ga9e1e4309 00:01:53.844 13:14:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:53.844 13:14:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:53.844 13:14:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:53.844 13:14:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:53.844 13:14:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.844 ************************************ 00:01:53.844 START TEST asan 00:01:53.844 ************************************ 00:01:53.844 using asan 00:01:53.844 13:14:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:53.844 00:01:53.844 real 0m0.000s 00:01:53.844 user 0m0.000s 00:01:53.844 sys 0m0.000s 00:01:53.844 ************************************ 00:01:53.844 END TEST asan 00:01:53.844 ************************************ 00:01:53.844 13:14:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:53.844 13:14:42 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:53.844 13:14:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:53.844 13:14:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:53.844 13:14:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:53.844 13:14:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:53.844 13:14:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.106 ************************************ 00:01:54.106 START TEST ubsan 00:01:54.106 ************************************ 00:01:54.106 using ubsan 00:01:54.106 13:14:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:54.106 00:01:54.106 real 0m0.000s 00:01:54.106 user 0m0.000s 00:01:54.106 sys 0m0.000s 00:01:54.106 ************************************ 00:01:54.106 END TEST ubsan 00:01:54.106 ************************************ 00:01:54.106 13:14:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:54.106 13:14:42 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.106 13:14:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:54.106 13:14:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:54.106 13:14:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:54.106 13:14:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:54.106 13:14:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.106 13:14:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:54.106 13:14:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
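The START TEST / END TEST banners and the zeroed real/user/sys timings above come from SPDK's run_test helper (sourced here as common/autotest_common.sh). Reduced to the behavior visible in this log, it is roughly:

  # Sketch only: the real helper also manages xtrace state and suite accounting.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test asan echo 'using asan'
  run_test ubsan echo 'using ubsan'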
00:01:54.106 13:14:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:54.106 13:14:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:54.106 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:54.106 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.679 Using 'verbs' RDMA provider 00:02:05.642 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:15.655 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:15.655 Creating mk/config.mk...done. 00:02:15.655 Creating mk/cc.flags.mk...done. 00:02:15.655 Type 'make' to build. 00:02:15.655 13:15:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:15.655 13:15:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:15.655 13:15:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:15.655 13:15:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.655 ************************************ 00:02:15.655 START TEST make 00:02:15.655 ************************************ 00:02:15.655 13:15:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:15.655 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:15.655 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:15.655 meson setup builddir \ 00:02:15.655 -Dwith-libaio=enabled \ 00:02:15.655 -Dwith-liburing=enabled \ 00:02:15.655 -Dwith-libvfn=disabled \ 00:02:15.655 -Dwith-spdk=disabled \ 00:02:15.655 -Dexamples=false \ 00:02:15.655 -Dtests=false \ 00:02:15.655 -Dtools=false && \ 00:02:15.655 meson compile -C builddir && \ 00:02:15.655 cd -) 00:02:15.655 make[1]: Nothing to be done for 'all'. 
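The parenthesized block above is the make recipe for the xnvme submodule: SPDK hands the subproject to meson/ninja rather than building it with make. Collected into a standalone, runnable form (path and flags verbatim from the log):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir \
      -Dwith-libaio=enabled \
      -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled \
      -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir   # wraps ninja, as the INFO lines further below show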
00:02:17.572 The Meson build system 00:02:17.572 Version: 1.5.0 00:02:17.572 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:17.572 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:17.572 Build type: native build 00:02:17.572 Project name: xnvme 00:02:17.572 Project version: 0.7.5 00:02:17.572 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.572 C linker for the host machine: cc ld.bfd 2.40-14 00:02:17.572 Host machine cpu family: x86_64 00:02:17.572 Host machine cpu: x86_64 00:02:17.572 Message: host_machine.system: linux 00:02:17.572 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:17.572 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:17.572 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:17.572 Run-time dependency threads found: YES 00:02:17.572 Has header "setupapi.h" : NO 00:02:17.572 Has header "linux/blkzoned.h" : YES 00:02:17.572 Has header "linux/blkzoned.h" : YES (cached) 00:02:17.572 Has header "libaio.h" : YES 00:02:17.572 Library aio found: YES 00:02:17.572 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.572 Run-time dependency liburing found: YES 2.2 00:02:17.572 Dependency libvfn skipped: feature with-libvfn disabled 00:02:17.572 Found CMake: /usr/bin/cmake (3.27.7) 00:02:17.572 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:17.572 Subproject spdk : skipped: feature with-spdk disabled 00:02:17.572 Run-time dependency appleframeworks found: NO (tried framework) 00:02:17.572 Run-time dependency appleframeworks found: NO (tried framework) 00:02:17.572 Library rt found: YES 00:02:17.572 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:17.572 Configuring xnvme_config.h using configuration 00:02:17.572 Configuring xnvme.spec using configuration 00:02:17.572 Run-time dependency bash-completion found: YES 2.11 00:02:17.572 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:17.572 Program cp found: YES (/usr/bin/cp) 00:02:17.572 Build targets in project: 3 00:02:17.572 00:02:17.572 xnvme 0.7.5 00:02:17.572 00:02:17.572 Subprojects 00:02:17.572 spdk : NO Feature 'with-spdk' disabled 00:02:17.572 00:02:17.572 User defined options 00:02:17.572 examples : false 00:02:17.572 tests : false 00:02:17.572 tools : false 00:02:17.572 with-libaio : enabled 00:02:17.572 with-liburing: enabled 00:02:17.572 with-libvfn : disabled 00:02:17.572 with-spdk : disabled 00:02:17.572 00:02:17.572 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.832 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:17.832 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:17.832 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:17.832 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:17.832 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:17.832 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:17.832 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:17.832 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:17.832 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:17.832 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:17.832 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:17.832 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:17.832 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:17.832 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:17.832 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:18.093 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:18.093 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:18.093 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:18.093 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:18.093 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:18.093 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:18.093 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:18.093 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:18.093 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:18.093 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:18.093 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:18.093 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:18.093 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:18.093 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:18.093 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:18.093 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:18.093 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:18.093 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:18.093 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:18.093 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:18.093 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:18.093 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:18.093 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:18.093 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:18.093 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:18.093 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:18.093 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:18.093 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:18.093 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:18.093 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:18.093 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:18.093 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:18.093 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:18.093 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:18.093 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:18.093 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:18.094 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:18.094 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:18.094 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:18.354 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:18.354 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:18.354 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:18.354 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:18.354 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:18.354 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:18.354 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:18.354 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:18.354 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:18.354 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:18.354 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:18.354 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:18.354 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:18.355 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:18.355 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:18.355 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:18.355 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:18.615 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:18.615 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:18.615 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:18.876 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:18.876 [75/76] Linking static target lib/libxnvme.a
00:02:18.876 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:18.876 INFO: autodetecting backend as ninja
00:02:18.876 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:18.877 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:25.494 The Meson build system
00:02:25.494 Version: 1.5.0
00:02:25.494 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:25.494 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:25.494 Build type: native build
00:02:25.494 Program cat found: YES (/usr/bin/cat)
00:02:25.494 Project name: DPDK
00:02:25.494 Project version: 24.03.0
00:02:25.494 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:25.494 C linker for the host machine: cc ld.bfd 2.40-14
00:02:25.494 Host machine cpu family: x86_64
00:02:25.494 Host machine cpu: x86_64
00:02:25.494 Message: ## Building in Developer Mode ##
00:02:25.494 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:25.494 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:25.494 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:25.494 Program python3 found: YES (/usr/bin/python3)
00:02:25.494 Program cat found: YES (/usr/bin/cat)
00:02:25.494 Compiler for C supports arguments -march=native: YES
00:02:25.494 Checking for size of "void *" : 8
00:02:25.494 Checking for size of "void *" : 8 (cached)
00:02:25.494 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:25.494 Library m found: YES
00:02:25.494 Library numa found: YES
00:02:25.494 Has header "numaif.h" : YES
00:02:25.495 Library fdt found: NO
00:02:25.495 Library execinfo found: NO
00:02:25.495 Has header "execinfo.h" : YES
00:02:25.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:25.495 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:25.495 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:25.495 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:25.495 Run-time dependency openssl found: YES 3.1.1
00:02:25.495 Run-time dependency libpcap found: YES 1.10.4
00:02:25.495 Has header "pcap.h" with dependency libpcap: YES
00:02:25.495 Compiler for C supports arguments -Wcast-qual: YES
00:02:25.495 Compiler for C supports arguments -Wdeprecated: YES
00:02:25.495 Compiler for C supports arguments -Wformat: YES
00:02:25.495 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:25.495 Compiler for C supports arguments -Wformat-security: NO
00:02:25.495 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:25.495 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:25.495 Compiler for C supports arguments -Wnested-externs: YES
00:02:25.495 Compiler for C supports arguments -Wold-style-definition: YES
00:02:25.495 Compiler for C supports arguments -Wpointer-arith: YES
00:02:25.495 Compiler for C supports arguments -Wsign-compare: YES
00:02:25.495 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:25.495 Compiler for C supports arguments -Wundef: YES
00:02:25.495 Compiler for C supports arguments -Wwrite-strings: YES
00:02:25.495 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:25.495 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:25.495 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:25.495 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:25.495 Program objdump found: YES (/usr/bin/objdump)
00:02:25.495 Compiler for C supports arguments -mavx512f: YES
00:02:25.495 Checking if "AVX512 checking" compiles: YES
00:02:25.495 Fetching value of define "__SSE4_2__" : 1
00:02:25.495 Fetching value of define "__AES__" : 1
00:02:25.495 Fetching value of define "__AVX__" : 1
00:02:25.495 Fetching value of define "__AVX2__" : 1
00:02:25.495 Fetching value of define "__AVX512BW__" : 1
00:02:25.495 Fetching value of define "__AVX512CD__" : 1
00:02:25.495 Fetching value of define "__AVX512DQ__" : 1
00:02:25.495 Fetching value of define "__AVX512F__" : 1
00:02:25.495 Fetching value of define "__AVX512VL__" : 1
00:02:25.495 Fetching value of define "__PCLMUL__" : 1
00:02:25.495 Fetching value of define "__RDRND__" : 1
00:02:25.495 Fetching value of define "__RDSEED__" : 1
00:02:25.495 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:25.495 Fetching value of define "__znver1__" : (undefined)
00:02:25.495 Fetching value of define "__znver2__" : (undefined)
00:02:25.495 Fetching value of define "__znver3__" : (undefined)
00:02:25.495 Fetching value of define "__znver4__" : (undefined)
00:02:25.495 Library asan found: YES
00:02:25.495 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:25.495 Message: lib/log: Defining dependency "log"
00:02:25.495 Message: lib/kvargs: Defining dependency "kvargs"
00:02:25.495 Message: lib/telemetry: Defining dependency "telemetry"
00:02:25.495 Library rt found: YES
00:02:25.495 Checking for function "getentropy" : NO
00:02:25.495 Message: lib/eal: Defining dependency "eal"
00:02:25.495 Message: lib/ring: Defining dependency "ring"
00:02:25.495 Message: lib/rcu: Defining dependency "rcu"
00:02:25.495 Message: lib/mempool: Defining dependency "mempool"
00:02:25.495 Message: lib/mbuf: Defining dependency "mbuf"
00:02:25.495 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:25.495 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:25.495 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:25.495 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:25.495 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:25.495 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:25.495 Compiler for C supports arguments -mpclmul: YES
00:02:25.495 Compiler for C supports arguments -maes: YES
00:02:25.495 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:25.495 Compiler for C supports arguments -mavx512bw: YES
00:02:25.495 Compiler for C supports arguments -mavx512dq: YES
00:02:25.495 Compiler for C supports arguments -mavx512vl: YES
00:02:25.495 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:25.495 Compiler for C supports arguments -mavx2: YES
00:02:25.495 Compiler for C supports arguments -mavx: YES
00:02:25.495 Message: lib/net: Defining dependency "net"
00:02:25.495 Message: lib/meter: Defining dependency "meter"
00:02:25.495 Message: lib/ethdev: Defining dependency "ethdev"
00:02:25.495 Message: lib/pci: Defining dependency "pci"
00:02:25.495 Message: lib/cmdline: Defining dependency "cmdline"
00:02:25.495 Message: lib/hash: Defining dependency "hash"
00:02:25.495 Message: lib/timer: Defining dependency "timer"
00:02:25.495 Message: lib/compressdev: Defining dependency "compressdev"
00:02:25.495 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:25.495 Message: lib/dmadev: Defining dependency "dmadev"
00:02:25.495 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:25.495 Message: lib/power: Defining dependency "power"
00:02:25.495 Message: lib/reorder: Defining dependency "reorder"
00:02:25.495 Message: lib/security: Defining dependency "security"
00:02:25.495 Has header "linux/userfaultfd.h" : YES
00:02:25.495 Has header "linux/vduse.h" : YES
00:02:25.495 Message: lib/vhost: Defining dependency "vhost"
00:02:25.495 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:25.495 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:25.495 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:25.495 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:25.495 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:25.495 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:25.495 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:25.495 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:25.495 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:25.495 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:25.495 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:25.495 Configuring doxy-api-html.conf using configuration
00:02:25.495 Configuring doxy-api-man.conf using configuration
00:02:25.495 Program mandb found: YES (/usr/bin/mandb)
00:02:25.495 Program sphinx-build found: NO
00:02:25.495 Configuring rte_build_config.h using configuration
00:02:25.495 Message: 
00:02:25.495 =================
00:02:25.495 Applications Enabled
00:02:25.495 =================
00:02:25.495 
00:02:25.495 apps:
00:02:25.495 
00:02:25.495 
00:02:25.495 Message: 
00:02:25.495 =================
00:02:25.495 Libraries Enabled
00:02:25.495 =================
00:02:25.495 
00:02:25.495 libs:
00:02:25.495 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:25.495 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:25.495 cryptodev, dmadev, power, reorder, security, vhost,
00:02:25.495 
00:02:25.495 Message: 
00:02:25.495 ===============
00:02:25.495 Drivers Enabled
00:02:25.495 ===============
00:02:25.495 
00:02:25.495 common:
00:02:25.495 
00:02:25.495 bus:
00:02:25.495 pci, vdev,
00:02:25.495 mempool:
00:02:25.495 ring,
00:02:25.495 dma:
00:02:25.495 
00:02:25.495 net:
00:02:25.495 
00:02:25.495 crypto:
00:02:25.495 
00:02:25.495 compress:
00:02:25.495 
00:02:25.495 vdpa:
00:02:25.495 
00:02:25.495 
00:02:25.495 Message: 
00:02:25.495 =================
00:02:25.495 Content Skipped
00:02:25.495 =================
00:02:25.495 
00:02:25.495 apps:
00:02:25.495 dumpcap: explicitly disabled via build config
00:02:25.495 graph: explicitly disabled via build config
00:02:25.495 pdump: explicitly disabled via build config
00:02:25.495 proc-info: explicitly disabled via build config
00:02:25.495 test-acl: explicitly disabled via build config
00:02:25.495 test-bbdev: explicitly disabled via build config
00:02:25.495 test-cmdline: explicitly disabled via build config
00:02:25.495 test-compress-perf: explicitly disabled via build config
00:02:25.495 test-crypto-perf: explicitly disabled via build config
00:02:25.495 test-dma-perf: explicitly disabled via build config
00:02:25.495 test-eventdev: explicitly disabled via build config
00:02:25.495 test-fib: explicitly disabled via build config
00:02:25.495 test-flow-perf: explicitly disabled via build config
00:02:25.495 test-gpudev: explicitly disabled via build config
00:02:25.495 test-mldev: explicitly disabled via build config
00:02:25.495 test-pipeline: explicitly disabled via build config
00:02:25.495 test-pmd: explicitly disabled via build config
00:02:25.495 test-regex: explicitly disabled via build config
00:02:25.495 test-sad: explicitly disabled via build config
00:02:25.495 test-security-perf: explicitly disabled via build config
00:02:25.495 
00:02:25.495 libs:
00:02:25.495 argparse: explicitly disabled via build config
00:02:25.495 metrics: explicitly disabled via build config
00:02:25.495 acl: explicitly disabled via build config
00:02:25.495 bbdev: explicitly disabled via build config
00:02:25.495 bitratestats: explicitly disabled via build config
00:02:25.495 bpf: explicitly disabled via build config
00:02:25.495 cfgfile: explicitly disabled via build config
00:02:25.495 distributor: explicitly disabled via build config
00:02:25.495 efd: explicitly disabled via build config
00:02:25.495 eventdev: explicitly disabled via build config
00:02:25.495 dispatcher: explicitly disabled via build config
00:02:25.495 gpudev: explicitly disabled via build config
00:02:25.495 gro: explicitly disabled via build config
00:02:25.495 gso: explicitly disabled via build config
00:02:25.495 ip_frag: explicitly disabled via build config
00:02:25.495 jobstats: explicitly disabled via build config
00:02:25.495 latencystats: explicitly disabled via build config
00:02:25.495 lpm: explicitly disabled via build config
00:02:25.495 member: explicitly disabled via build config
00:02:25.495 pcapng: explicitly disabled via build config
00:02:25.495 rawdev: explicitly disabled via build config
00:02:25.495 regexdev: explicitly disabled via build config
00:02:25.496 mldev: explicitly disabled via build config
00:02:25.496 rib: explicitly disabled via build config
00:02:25.496 sched: explicitly disabled via build config
00:02:25.496 stack: explicitly disabled via build config
00:02:25.496 ipsec: explicitly disabled via build config
00:02:25.496 pdcp: explicitly disabled via build config
00:02:25.496 fib: explicitly disabled via build config
00:02:25.496 port: explicitly disabled via build config
00:02:25.496 pdump: explicitly disabled via build config
00:02:25.496 table: explicitly disabled via build config
00:02:25.496 pipeline: explicitly disabled via build config
00:02:25.496 graph: explicitly disabled via build config
00:02:25.496 node: explicitly disabled via build config
00:02:25.496 
00:02:25.496 drivers:
00:02:25.496 common/cpt: not in enabled drivers build config
00:02:25.496 common/dpaax: not in enabled drivers build config
00:02:25.496 common/iavf: not in enabled drivers build config
00:02:25.496 common/idpf: not in enabled drivers build config
00:02:25.496 common/ionic: not in enabled drivers build config
00:02:25.496 common/mvep: not in enabled drivers build config
00:02:25.496 common/octeontx: not in enabled drivers build config
00:02:25.496 bus/auxiliary: not in enabled drivers build config
00:02:25.496 bus/cdx: not in enabled drivers build config
00:02:25.496 bus/dpaa: not in enabled drivers build config
00:02:25.496 bus/fslmc: not in enabled drivers build config
00:02:25.496 bus/ifpga: not in enabled drivers build config
00:02:25.496 bus/platform: not in enabled drivers build config
00:02:25.496 bus/uacce: not in enabled drivers build config
00:02:25.496 bus/vmbus: not in enabled drivers build config
00:02:25.496 common/cnxk: not in enabled drivers build config
00:02:25.496 common/mlx5: not in enabled drivers build config
00:02:25.496 common/nfp: not in enabled drivers build config
00:02:25.496 common/nitrox: not in enabled drivers build config
00:02:25.496 common/qat: not in enabled drivers build config
00:02:25.496 common/sfc_efx: not in enabled drivers build config
00:02:25.496 mempool/bucket: not in enabled drivers build config
00:02:25.496 mempool/cnxk: not in enabled drivers build config
00:02:25.496 mempool/dpaa: not in enabled drivers build config
00:02:25.496 mempool/dpaa2: not in enabled drivers build config
00:02:25.496 mempool/octeontx: not in enabled drivers build config
00:02:25.496 mempool/stack: not in enabled drivers build config
00:02:25.496 dma/cnxk: not in enabled drivers build config
00:02:25.496 dma/dpaa: not in enabled drivers build config
00:02:25.496 dma/dpaa2: not in enabled drivers build config
00:02:25.496 dma/hisilicon: not in enabled drivers build config
00:02:25.496 dma/idxd: not in enabled drivers build config
00:02:25.496 dma/ioat: not in enabled drivers build config
00:02:25.496 dma/skeleton: not in enabled drivers build config
00:02:25.496 net/af_packet: not in enabled drivers build config
00:02:25.496 net/af_xdp: not in enabled drivers build config
00:02:25.496 net/ark: not in enabled drivers build config
00:02:25.496 net/atlantic: not in enabled drivers build config
00:02:25.496 net/avp: not in enabled drivers build config
00:02:25.496 net/axgbe: not in enabled drivers build config
00:02:25.496 net/bnx2x: not in enabled drivers build config
00:02:25.496 net/bnxt: not in enabled drivers build config
00:02:25.496 net/bonding: not in enabled drivers build config
00:02:25.496 net/cnxk: not in enabled drivers build config
00:02:25.496 net/cpfl: not in enabled drivers build config
00:02:25.496 net/cxgbe: not in enabled drivers build config
00:02:25.496 net/dpaa: not in enabled drivers build config
00:02:25.496 net/dpaa2: not in enabled drivers build config
00:02:25.496 net/e1000: not in enabled drivers build config
00:02:25.496 net/ena: not in enabled drivers build config
00:02:25.496 net/enetc: not in enabled drivers build config
00:02:25.496 net/enetfec: not in enabled drivers build config
00:02:25.496 net/enic: not in enabled drivers build config
00:02:25.496 net/failsafe: not in enabled drivers build config
00:02:25.496 net/fm10k: not in enabled drivers build config
00:02:25.496 net/gve: not in enabled drivers build config
00:02:25.496 net/hinic: not in enabled drivers build config
00:02:25.496 net/hns3: not in enabled drivers build config
00:02:25.496 net/i40e: not in enabled drivers build config
00:02:25.496 net/iavf: not in enabled drivers build config
00:02:25.496 net/ice: not in enabled drivers build config
00:02:25.496 net/idpf: not in enabled drivers build config
00:02:25.496 net/igc: not in enabled drivers build config
00:02:25.496 net/ionic: not in enabled drivers build config
00:02:25.496 net/ipn3ke: not in enabled drivers build config
00:02:25.496 net/ixgbe: not in enabled drivers build config
00:02:25.496 net/mana: not in enabled drivers build config
00:02:25.496 net/memif: not in enabled drivers build config
00:02:25.496 net/mlx4: not in enabled drivers build config
00:02:25.496 net/mlx5: not in enabled drivers build config
00:02:25.496 net/mvneta: not in enabled drivers build config
00:02:25.496 net/mvpp2: not in enabled drivers build config
00:02:25.496 net/netvsc: not in enabled drivers build config
00:02:25.496 net/nfb: not in enabled drivers build config
00:02:25.496 net/nfp: not in enabled drivers build config
00:02:25.496 net/ngbe: not in enabled drivers build config
00:02:25.496 net/null: not in enabled drivers build config
00:02:25.496 net/octeontx: not in enabled drivers build config
00:02:25.496 net/octeon_ep: not in enabled drivers build config
00:02:25.496 net/pcap: not in enabled drivers build config
00:02:25.496 net/pfe: not in enabled drivers build config
00:02:25.496 net/qede: not in enabled drivers build config
00:02:25.496 net/ring: not in enabled drivers build config
00:02:25.496 net/sfc: not in enabled drivers build config
00:02:25.496 net/softnic: not in enabled drivers build config
00:02:25.496 net/tap: not in enabled drivers build config
00:02:25.496 net/thunderx: not in enabled drivers build config
00:02:25.496 net/txgbe: not in enabled drivers build config
00:02:25.496 net/vdev_netvsc: not in enabled drivers build config
00:02:25.496 net/vhost: not in enabled drivers build config
00:02:25.496 net/virtio: not in enabled drivers build config
00:02:25.496 net/vmxnet3: not in enabled drivers build config
00:02:25.496 raw/*: missing internal dependency, "rawdev"
00:02:25.496 crypto/armv8: not in enabled drivers build config
00:02:25.496 crypto/bcmfs: not in enabled drivers build config
00:02:25.496 crypto/caam_jr: not in enabled drivers build config
00:02:25.496 crypto/ccp: not in enabled drivers build config
00:02:25.496 crypto/cnxk: not in enabled drivers build config
00:02:25.496 crypto/dpaa_sec: not in enabled drivers build config
00:02:25.496 crypto/dpaa2_sec: not in enabled drivers build config
00:02:25.496 crypto/ipsec_mb: not in enabled drivers build config
00:02:25.496 crypto/mlx5: not in enabled drivers build config
00:02:25.496 crypto/mvsam: not in enabled drivers build config
00:02:25.496 crypto/nitrox: not in enabled drivers build config
00:02:25.496 crypto/null: not in enabled drivers build config
00:02:25.496 crypto/octeontx: not in enabled drivers build config
00:02:25.496 crypto/openssl: not in enabled drivers build config
00:02:25.496 crypto/scheduler: not in enabled drivers build config
00:02:25.496 crypto/uadk: not in enabled drivers build config
00:02:25.496 crypto/virtio: not in enabled drivers build config
00:02:25.496 compress/isal: not in enabled drivers build config
00:02:25.496 compress/mlx5: not in enabled drivers build config
00:02:25.496 compress/nitrox: not in enabled drivers build config
00:02:25.496 compress/octeontx: not in enabled drivers build config
00:02:25.496 compress/zlib: not in enabled drivers build config
00:02:25.496 regex/*: missing internal dependency, "regexdev"
00:02:25.496 ml/*: missing internal dependency, "mldev"
00:02:25.496 vdpa/ifc: not in enabled drivers build config
00:02:25.496 vdpa/mlx5: not in enabled drivers build config
00:02:25.496 vdpa/nfp: not in enabled drivers build config
00:02:25.496 vdpa/sfc: not in enabled drivers build config
00:02:25.496 event/*: missing internal dependency, "eventdev"
00:02:25.496 baseband/*: missing internal dependency, "bbdev"
00:02:25.496 gpu/*: missing internal dependency, "gpudev"
00:02:25.496 
00:02:25.496 
00:02:25.496 Build targets in project: 84
00:02:25.496 
00:02:25.496 DPDK 24.03.0
00:02:25.496 
00:02:25.496 User defined options
00:02:25.496 buildtype : debug
00:02:25.496 default_library : shared
00:02:25.496 libdir : lib
00:02:25.496 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:25.496 b_sanitize : address
00:02:25.496 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:25.496 c_link_args : 
00:02:25.496 cpu_instruction_set: native
00:02:25.496 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:25.496 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:25.496 enable_docs : false
00:02:25.496 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:25.496 enable_kmods : false
00:02:25.496 max_lcores : 128
00:02:25.496 tests : false
00:02:25.496 
00:02:25.496 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:25.496 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:25.496 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:25.496 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:25.496 [3/267] Linking static target lib/librte_log.a
00:02:25.496 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:25.496 [5/267] Linking static target lib/librte_kvargs.a
00:02:25.496 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:25.496 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:25.496 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:25.496 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:25.496 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:25.496 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:25.497 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:25.497 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.497 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:25.497 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:25.497 [16/267] Linking static target lib/librte_telemetry.a
00:02:25.497 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:25.758 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:25.758 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:25.758 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.758 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:25.758 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:25.758 [23/267] Linking target lib/librte_log.so.24.1
00:02:26.020 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:26.020 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:26.020 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:26.020 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:26.020 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:26.020 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:26.020 [30/267] Linking target lib/librte_kvargs.so.24.1
00:02:26.281 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:26.281 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:26.281 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.281 [34/267] Linking target lib/librte_telemetry.so.24.1
00:02:26.281 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:26.281 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:26.281 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:26.281 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:26.281 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:26.281 [40/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:26.281 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:26.543 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:26.543 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:26.543 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:26.543 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:26.543 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:26.805 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:26.805 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:26.805 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:26.805 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.805 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:26.805 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:27.066 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:27.066 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:27.066 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:27.066 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:27.066 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:27.066 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:27.066 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:27.066 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:27.066 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:27.327 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:27.327 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:27.327 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:27.327 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:27.327 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:27.589 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:27.589 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:27.589 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:27.589 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:27.589 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:27.589 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:27.589 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:27.589 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:27.589 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:27.589 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:27.850 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:27.850 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:27.850 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:27.850 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:27.850 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:27.850 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:28.111 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:28.111 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:28.111 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:28.111 [86/267] Linking static target lib/librte_ring.a
00:02:28.111 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:28.372 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:28.372 [89/267] Linking static target lib/librte_eal.a
00:02:28.372 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:28.372 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:28.372 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:28.372 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:28.372 [94/267] Linking static target lib/librte_rcu.a
00:02:28.633 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:28.633 [96/267] Linking static target lib/librte_mempool.a
00:02:28.633 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.633 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:28.633 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:28.895 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.895 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:28.895 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:28.895 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:28.895 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:29.157 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:29.157 [106/267] Linking static target lib/librte_net.a
00:02:29.157 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:29.157 [108/267] Linking static target lib/librte_meter.a
00:02:29.157 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:29.157 [110/267] Linking static target lib/librte_mbuf.a
00:02:29.157 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:29.416 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:29.416 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:29.416 [114/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.416 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:29.416 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.416 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.676 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:29.676 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:29.936 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:29.936 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:29.936 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.936 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:30.196 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:30.196 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:30.196 [126/267] Linking static target lib/librte_pci.a
00:02:30.196 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:30.196 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:30.196 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:30.196 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:30.196 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:30.196 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:30.196 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:30.456 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:30.456 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:30.456 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:30.456 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:30.457 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.457 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:30.457 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:30.457 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:30.457 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:30.457 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:30.717 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:30.717 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:30.717 [146/267] Linking static target lib/librte_cmdline.a
00:02:30.717 [147/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:30.717 [148/267] Linking static target lib/librte_timer.a
00:02:30.717 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:30.717 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:30.976 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:30.976 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:30.976 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:31.235 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:31.235 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:31.235 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:31.235 [157/267] Linking static target lib/librte_hash.a
00:02:31.235 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:31.235 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:31.235 [160/267] Linking static target lib/librte_compressdev.a
00:02:31.235 [161/267] Linking static target lib/librte_ethdev.a
00:02:31.235 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.495 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:31.495 [164/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:31.495 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:31.495 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:31.495 [167/267] Linking static target lib/librte_dmadev.a
00:02:31.495 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:31.754 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:31.754 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:31.754 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:31.754 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.012 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:32.012 [174/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.012 [175/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.012 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:32.012 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:32.012 [178/267] Linking static target lib/librte_cryptodev.a
00:02:32.012 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:32.012 [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:32.012 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.270 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:32.270 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:32.270 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:32.270 [185/267] Linking static target lib/librte_power.a
00:02:32.531 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:32.531 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:32.531 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:32.791 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:32.791 [190/267] Linking static target lib/librte_reorder.a
00:02:32.791 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:32.791 [192/267] Linking static target lib/librte_security.a
00:02:32.791 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:33.052 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:33.052 [195/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.313 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:33.314 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.314 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.314 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:33.576 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:33.576 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:33.576 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:33.576 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:33.576 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:33.839 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:33.839 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:33.839 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:33.839 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:33.839 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:33.839 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.101 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:34.101 [212/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:34.101 [213/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:34.101 [214/267] Linking static target drivers/librte_bus_pci.a
00:02:34.101 [215/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:34.101 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:34.101 [217/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:34.101 [218/267] Linking static target drivers/librte_bus_vdev.a
00:02:34.101 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:34.101 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:34.360 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:34.360 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:34.360 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:34.360 [224/267] Linking static target drivers/librte_mempool_ring.a
00:02:34.360 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.621 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.881 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:35.824 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.824 [229/267] Linking target lib/librte_eal.so.24.1
00:02:35.824 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:35.824 [231/267] Linking target lib/librte_pci.so.24.1
00:02:35.824 [232/267] Linking target lib/librte_timer.so.24.1
00:02:35.824 [233/267] Linking target lib/librte_meter.so.24.1
00:02:35.824 [234/267] Linking target lib/librte_ring.so.24.1
00:02:35.824 [235/267] Linking target lib/librte_dmadev.so.24.1
00:02:35.824 [236/267] Linking target drivers/librte_bus_vdev.so.24.1
00:02:36.085 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:36.085 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:36.085 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:36.085 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:36.085 [241/267] Linking target drivers/librte_bus_pci.so.24.1
00:02:36.085 [242/267] Linking target lib/librte_mempool.so.24.1
00:02:36.085 [243/267] Linking target lib/librte_rcu.so.24.1
00:02:36.085 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:36.085 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:36.085 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:36.085 [247/267] Linking target lib/librte_mbuf.so.24.1
00:02:36.085 [248/267] Linking target drivers/librte_mempool_ring.so.24.1
00:02:36.347 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:36.347 [250/267] Linking target lib/librte_net.so.24.1
00:02:36.347 [251/267] Linking target lib/librte_reorder.so.24.1
00:02:36.347 [252/267] Linking target lib/librte_compressdev.so.24.1
00:02:36.347 [253/267] Linking target lib/librte_cryptodev.so.24.1
00:02:36.347 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:36.347 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:36.347 [256/267] Linking target lib/librte_cmdline.so.24.1
00:02:36.347 [257/267] Linking target lib/librte_hash.so.24.1
00:02:36.347 [258/267] Linking target lib/librte_security.so.24.1
00:02:36.609 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:36.609 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.871 [261/267] Linking target lib/librte_ethdev.so.24.1
00:02:36.871 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:36.871 [263/267] Linking target lib/librte_power.so.24.1
00:02:37.443 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:37.703 [265/267] Linking static target lib/librte_vhost.a
00:02:38.638 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.896 [267/267] Linking target lib/librte_vhost.so.24.1
00:02:38.896 INFO: autodetecting backend as ninja
00:02:38.896 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:53.786 CC lib/ut_mock/mock.o
00:02:53.786 CC lib/log/log.o
00:02:53.786 CC lib/log/log_flags.o
00:02:53.786 CC lib/log/log_deprecated.o
00:02:53.786 CC lib/ut/ut.o
00:02:53.786 LIB libspdk_ut_mock.a
00:02:53.787 LIB libspdk_log.a
00:02:53.787 LIB libspdk_ut.a
00:02:53.787 SO libspdk_ut_mock.so.6.0
00:02:53.787 SO libspdk_log.so.7.1
00:02:53.787 SO libspdk_ut.so.2.0
00:02:53.787 SYMLINK libspdk_ut_mock.so
00:02:53.787 SYMLINK libspdk_ut.so
00:02:53.787 SYMLINK libspdk_log.so
00:02:53.787 CXX lib/trace_parser/trace.o
00:02:53.787 CC lib/util/base64.o
00:02:53.787 CC lib/util/bit_array.o
00:02:53.787 CC lib/dma/dma.o
00:02:53.787 CC lib/util/crc16.o
00:02:53.787 CC lib/util/cpuset.o
00:02:53.787 CC lib/util/crc32.o
00:02:53.787 CC lib/util/crc32c.o
00:02:53.787 CC lib/ioat/ioat.o
00:02:53.787 CC lib/vfio_user/host/vfio_user_pci.o
00:02:53.787 CC lib/util/crc32_ieee.o
00:02:53.787 CC lib/util/crc64.o
00:02:53.787 CC lib/vfio_user/host/vfio_user.o
00:02:53.787 CC lib/util/dif.o
00:02:53.787 LIB libspdk_dma.a
00:02:53.787 CC lib/util/fd.o
00:02:53.787 SO libspdk_dma.so.5.0
00:02:54.045 CC lib/util/fd_group.o
00:02:54.045 CC lib/util/file.o
00:02:54.045 CC lib/util/hexlify.o
00:02:54.045 SYMLINK libspdk_dma.so
00:02:54.045 CC lib/util/iov.o
00:02:54.045 LIB libspdk_ioat.a
00:02:54.045 SO libspdk_ioat.so.7.0
00:02:54.045 CC lib/util/math.o
00:02:54.045 CC lib/util/net.o
00:02:54.045 LIB libspdk_vfio_user.a
00:02:54.045 CC lib/util/pipe.o
00:02:54.045 CC lib/util/strerror_tls.o
00:02:54.045 SYMLINK libspdk_ioat.so
00:02:54.045 CC lib/util/string.o
00:02:54.045 SO libspdk_vfio_user.so.5.0
00:02:54.045 CC lib/util/uuid.o
00:02:54.045 SYMLINK libspdk_vfio_user.so
00:02:54.045 CC lib/util/xor.o
00:02:54.045 CC lib/util/zipf.o
00:02:54.045 CC lib/util/md5.o
00:02:54.613 LIB libspdk_util.a
00:02:54.613 LIB libspdk_trace_parser.a
00:02:54.613 SO libspdk_util.so.10.1
00:02:54.613 SO libspdk_trace_parser.so.6.0
SYMLINK libspdk_util.so
00:02:54.613 SYMLINK libspdk_trace_parser.so
00:02:54.871 CC lib/rdma_utils/rdma_utils.o
00:02:54.871 CC lib/json/json_parse.o
00:02:54.871 CC lib/json/json_write.o
00:02:54.871 CC lib/json/json_util.o
00:02:54.871 CC lib/env_dpdk/env.o
00:02:54.871 CC lib/env_dpdk/memory.o
00:02:54.871 CC lib/env_dpdk/pci.o
00:02:54.871 CC lib/conf/conf.o
00:02:54.871 CC lib/vmd/vmd.o
00:02:54.871 CC lib/idxd/idxd.o
00:02:55.130 LIB libspdk_conf.a
00:02:55.130 CC lib/idxd/idxd_user.o
00:02:55.130 CC lib/idxd/idxd_kernel.o
00:02:55.130 SO libspdk_conf.so.6.0
00:02:55.130 LIB libspdk_rdma_utils.a
00:02:55.130 SO libspdk_rdma_utils.so.1.0
00:02:55.130 LIB libspdk_json.a
00:02:55.130 SYMLINK libspdk_conf.so
00:02:55.130 CC lib/env_dpdk/init.o
00:02:55.130 SO libspdk_json.so.6.0
00:02:55.130 SYMLINK libspdk_rdma_utils.so
00:02:55.130 SYMLINK libspdk_json.so
00:02:55.130 CC lib/vmd/led.o
00:02:55.130 CC lib/env_dpdk/threads.o
00:02:55.130 CC lib/env_dpdk/pci_ioat.o
00:02:55.388 CC lib/rdma_provider/common.o
00:02:55.388 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:55.388 CC lib/jsonrpc/jsonrpc_server.o
00:02:55.388 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:55.388 CC lib/env_dpdk/pci_virtio.o
00:02:55.388 CC lib/env_dpdk/pci_vmd.o
00:02:55.388 LIB libspdk_idxd.a
00:02:55.388 CC lib/env_dpdk/pci_idxd.o
00:02:55.388 CC lib/env_dpdk/pci_event.o
00:02:55.388 CC lib/env_dpdk/sigbus_handler.o
00:02:55.388 LIB libspdk_vmd.a
00:02:55.388 SO libspdk_idxd.so.12.1
00:02:55.389 LIB libspdk_rdma_provider.a
00:02:55.389 SO libspdk_vmd.so.6.0
00:02:55.389 SO libspdk_rdma_provider.so.7.0
00:02:55.649 CC lib/jsonrpc/jsonrpc_client.o
00:02:55.649 SYMLINK libspdk_idxd.so
00:02:55.649 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:55.649 CC lib/env_dpdk/pci_dpdk.o
00:02:55.649 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:55.649 SYMLINK libspdk_vmd.so
00:02:55.649 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:55.649 SYMLINK libspdk_rdma_provider.so
00:02:55.649 LIB libspdk_jsonrpc.a
00:02:55.649 SO libspdk_jsonrpc.so.6.0
00:02:55.910 SYMLINK libspdk_jsonrpc.so
00:02:56.171 CC lib/rpc/rpc.o
00:02:56.171 LIB libspdk_env_dpdk.a
00:02:56.171 LIB libspdk_rpc.a
00:02:56.171 SO libspdk_env_dpdk.so.15.1
00:02:56.171 SO libspdk_rpc.so.6.0
00:02:56.431 SYMLINK libspdk_rpc.so
00:02:56.431 SYMLINK libspdk_env_dpdk.so
00:02:56.431 CC lib/keyring/keyring.o
00:02:56.431 CC lib/keyring/keyring_rpc.o
00:02:56.431 CC lib/notify/notify.o
00:02:56.431 CC lib/notify/notify_rpc.o
00:02:56.431 CC lib/trace/trace.o
00:02:56.431 CC lib/trace/trace_flags.o
00:02:56.431 CC lib/trace/trace_rpc.o
00:02:56.692 LIB libspdk_notify.a
00:02:56.692 SO libspdk_notify.so.6.0
00:02:56.692 LIB libspdk_trace.a
00:02:56.692 SYMLINK libspdk_notify.so
00:02:56.692 LIB libspdk_keyring.a
00:02:56.692 SO libspdk_trace.so.11.0
00:02:56.692 SO libspdk_keyring.so.2.0
00:02:56.692 SYMLINK libspdk_trace.so
00:02:56.692 SYMLINK libspdk_keyring.so
00:02:56.951 CC lib/sock/sock.o
00:02:56.951 CC lib/sock/sock_rpc.o
00:02:56.951 CC lib/thread/thread.o
00:02:56.951 CC lib/thread/iobuf.o
00:02:57.212 LIB libspdk_sock.a
00:02:57.473 SO libspdk_sock.so.10.0
00:02:57.473 SYMLINK libspdk_sock.so
00:02:57.736 CC lib/nvme/nvme_ns_cmd.o
00:02:57.736 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:57.736 CC lib/nvme/nvme_fabric.o
00:02:57.736 CC lib/nvme/nvme_ctrlr.o
00:02:57.736 CC lib/nvme/nvme_ns.o
00:02:57.736 CC lib/nvme/nvme_pcie_common.o
00:02:57.736 CC lib/nvme/nvme_pcie.o
00:02:57.736 CC lib/nvme/nvme.o
00:02:57.736 CC lib/nvme/nvme_qpair.o
00:02:58.309 CC lib/nvme/nvme_quirks.o
00:02:58.309 CC lib/nvme/nvme_transport.o
00:02:58.309 CC lib/nvme/nvme_discovery.o
00:02:58.309 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:58.309 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:58.570 CC lib/nvme/nvme_tcp.o
00:02:58.570 LIB libspdk_thread.a
00:02:58.570 CC lib/nvme/nvme_opal.o
00:02:58.570 SO libspdk_thread.so.11.0
00:02:58.571 CC lib/nvme/nvme_io_msg.o
00:02:58.571 SYMLINK libspdk_thread.so
00:02:58.571 CC lib/accel/accel.o
00:02:58.832 CC lib/accel/accel_rpc.o
00:02:58.832 CC lib/accel/accel_sw.o
00:02:58.832 CC lib/nvme/nvme_poll_group.o
00:02:58.832 CC lib/nvme/nvme_zns.o
00:02:58.832 CC lib/nvme/nvme_stubs.o
00:02:59.093 CC lib/nvme/nvme_auth.o
00:02:59.093 CC lib/nvme/nvme_cuse.o
00:02:59.093 CC lib/nvme/nvme_rdma.o
00:02:59.093 CC lib/blob/blobstore.o
00:02:59.354 CC lib/blob/request.o
00:02:59.355 CC lib/blob/zeroes.o
00:02:59.616 CC lib/init/json_config.o
00:02:59.616 CC lib/blob/blob_bs_dev.o
00:02:59.616 LIB libspdk_accel.a
00:02:59.616 SO libspdk_accel.so.16.0
00:02:59.616 SYMLINK libspdk_accel.so
00:02:59.877 CC lib/init/subsystem.o
00:02:59.877 CC lib/init/subsystem_rpc.o
00:02:59.877 CC lib/init/rpc.o
00:02:59.877 CC lib/virtio/virtio.o
00:02:59.877 CC lib/virtio/virtio_vhost_user.o
00:02:59.877 CC lib/fsdev/fsdev.o
00:02:59.877 CC lib/virtio/virtio_vfio_user.o
00:02:59.877 CC lib/fsdev/fsdev_io.o
00:02:59.877 CC lib/virtio/virtio_pci.o
00:02:59.877 LIB libspdk_init.a
00:03:00.138 CC lib/bdev/bdev.o
00:03:00.138 SO libspdk_init.so.6.0
00:03:00.138 SYMLINK libspdk_init.so
00:03:00.138 CC lib/bdev/bdev_rpc.o
00:03:00.138 CC lib/bdev/bdev_zone.o
00:03:00.138 CC lib/bdev/part.o
00:03:00.138 CC lib/bdev/scsi_nvme.o
00:03:00.138 LIB libspdk_virtio.a
00:03:00.138 SO libspdk_virtio.so.7.0
00:03:00.399 CC lib/fsdev/fsdev_rpc.o
00:03:00.399 SYMLINK libspdk_virtio.so
00:03:00.399 LIB libspdk_fsdev.a
00:03:00.399 SO libspdk_fsdev.so.2.0
00:03:00.399 CC lib/event/app.o
00:03:00.399 CC lib/event/reactor.o
00:03:00.399 CC lib/event/log_rpc.o
00:03:00.399 CC lib/event/app_rpc.o
00:03:00.399 CC lib/event/scheduler_static.o
00:03:00.399 LIB libspdk_nvme.a
00:03:00.399 SYMLINK libspdk_fsdev.so
00:03:00.660 SO libspdk_nvme.so.15.0
00:03:00.660 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:00.920 SYMLINK libspdk_nvme.so
00:03:00.921 LIB libspdk_event.a
00:03:00.921 SO libspdk_event.so.14.0
00:03:01.182 SYMLINK libspdk_event.so
00:03:01.445 LIB libspdk_fuse_dispatcher.a
00:03:01.445 SO libspdk_fuse_dispatcher.so.1.0
00:03:01.445 SYMLINK libspdk_fuse_dispatcher.so
00:03:02.440 LIB libspdk_blob.a
00:03:02.726 SO libspdk_blob.so.12.0
00:03:02.726 LIB libspdk_bdev.a
00:03:02.726 SYMLINK libspdk_blob.so
00:03:02.726 SO libspdk_bdev.so.17.0
00:03:03.020 SYMLINK libspdk_bdev.so
00:03:03.020 CC lib/lvol/lvol.o
00:03:03.020 CC lib/blobfs/tree.o
00:03:03.020 CC lib/blobfs/blobfs.o
00:03:03.020 CC lib/nbd/nbd.o
00:03:03.020 CC lib/nbd/nbd_rpc.o
00:03:03.020 CC lib/nvmf/ctrlr.o
00:03:03.020 CC lib/nvmf/ctrlr_discovery.o
00:03:03.020 CC lib/ublk/ublk.o
00:03:03.020 CC lib/ftl/ftl_core.o
00:03:03.020 CC lib/scsi/dev.o
00:03:03.020 CC lib/scsi/lun.o
00:03:03.310 CC lib/ftl/ftl_init.o
00:03:03.310 CC lib/scsi/port.o
00:03:03.310 CC lib/scsi/scsi.o
00:03:03.310 CC lib/ftl/ftl_layout.o
00:03:03.310 CC lib/ftl/ftl_debug.o
00:03:03.310 CC lib/scsi/scsi_bdev.o
00:03:03.310 LIB libspdk_nbd.a
00:03:03.310 SO libspdk_nbd.so.7.0
00:03:03.614 CC lib/ublk/ublk_rpc.o
00:03:03.614 CC lib/nvmf/ctrlr_bdev.o
00:03:03.614 SYMLINK libspdk_nbd.so
00:03:03.614 CC lib/ftl/ftl_io.o
00:03:03.614 CC lib/nvmf/subsystem.o
00:03:03.614 CC lib/scsi/scsi_pr.o
00:03:03.614 LIB libspdk_lvol.a
00:03:03.614 LIB libspdk_ublk.a
00:03:03.614 SO libspdk_lvol.so.11.0
00:03:03.614 CC lib/scsi/scsi_rpc.o
00:03:03.614 SO libspdk_ublk.so.3.0
00:03:03.614 SYMLINK libspdk_lvol.so
00:03:03.614 CC lib/nvmf/nvmf.o
00:03:03.614 CC lib/ftl/ftl_sb.o
00:03:03.614 SYMLINK libspdk_ublk.so
00:03:03.614 CC lib/ftl/ftl_l2p.o
00:03:03.905 LIB libspdk_blobfs.a
00:03:03.905 SO libspdk_blobfs.so.11.0
00:03:03.905 CC lib/scsi/task.o
00:03:03.905 SYMLINK libspdk_blobfs.so
00:03:03.905 CC lib/nvmf/nvmf_rpc.o
00:03:03.905 CC lib/nvmf/transport.o
00:03:03.905 CC lib/ftl/ftl_l2p_flat.o
00:03:03.905 CC lib/ftl/ftl_nv_cache.o
00:03:03.905 CC lib/nvmf/tcp.o
00:03:03.905 LIB libspdk_scsi.a
00:03:03.905 SO libspdk_scsi.so.9.0
00:03:03.905 CC lib/ftl/ftl_band.o
00:03:04.182 SYMLINK libspdk_scsi.so
00:03:04.182 CC lib/ftl/ftl_band_ops.o
00:03:04.182 CC lib/ftl/ftl_writer.o
00:03:04.442 CC lib/ftl/ftl_rq.o
00:03:04.442 CC lib/ftl/ftl_reloc.o
00:03:04.442 CC lib/nvmf/stubs.o
00:03:04.442 CC lib/ftl/ftl_l2p_cache.o
00:03:04.442 CC lib/iscsi/conn.o
00:03:04.703 CC lib/iscsi/init_grp.o
00:03:04.703 CC lib/ftl/ftl_p2l.o
00:03:04.703 CC lib/iscsi/iscsi.o
00:03:04.703 CC lib/nvmf/mdns_server.o
00:03:04.703 CC lib/nvmf/rdma.o
00:03:04.963 CC lib/nvmf/auth.o
00:03:04.963 CC lib/vhost/vhost.o
00:03:04.963 CC lib/vhost/vhost_rpc.o
00:03:04.963 CC lib/ftl/ftl_p2l_log.o
00:03:04.963 CC lib/ftl/mngt/ftl_mngt.o
00:03:04.963 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:05.224 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:05.224 CC lib/vhost/vhost_scsi.o
00:03:05.224 CC lib/iscsi/param.o
00:03:05.224 CC lib/iscsi/portal_grp.o
00:03:05.224 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:05.485 CC lib/iscsi/tgt_node.o
00:03:05.485 CC lib/iscsi/iscsi_subsystem.o
00:03:05.485 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:05.485 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:05.485 CC lib/iscsi/iscsi_rpc.o
00:03:05.485 CC lib/iscsi/task.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:05.746 CC lib/vhost/vhost_blk.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:05.746 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:05.746 CC lib/ftl/utils/ftl_conf.o
00:03:06.006 LIB libspdk_iscsi.a
00:03:06.006 SO libspdk_iscsi.so.8.0
00:03:06.006 CC lib/vhost/rte_vhost_user.o
00:03:06.006 CC lib/ftl/utils/ftl_md.o
00:03:06.006 CC lib/ftl/utils/ftl_bitmap.o
00:03:06.006 CC lib/ftl/utils/ftl_mempool.o
00:03:06.006 CC lib/ftl/utils/ftl_property.o
00:03:06.006 SYMLINK libspdk_iscsi.so
00:03:06.006 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:06.006 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:06.006 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:06.006 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:06.267 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:06.267 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:06.267 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:06.267 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:06.267 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:06.267 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:06.267 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:06.267 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:06.267 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:06.529 CC lib/ftl/base/ftl_base_dev.o
00:03:06.529 CC lib/ftl/base/ftl_base_bdev.o
00:03:06.529 CC lib/ftl/ftl_trace.o
00:03:06.529 LIB libspdk_ftl.a
00:03:06.789 LIB libspdk_vhost.a
00:03:06.789 SO libspdk_ftl.so.9.0
00:03:06.789 SO libspdk_vhost.so.8.0
00:03:06.789 LIB libspdk_nvmf.a
00:03:06.789 SYMLINK libspdk_vhost.so
00:03:07.048 SO libspdk_nvmf.so.20.0
00:03:07.048 SYMLINK libspdk_ftl.so
00:03:07.048 SYMLINK libspdk_nvmf.so
00:03:07.309 CC module/env_dpdk/env_dpdk_rpc.o
00:03:07.569 CC module/accel/dsa/accel_dsa.o
00:03:07.569 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:07.569 CC module/sock/posix/posix.o
00:03:07.569 CC module/keyring/file/keyring.o
00:03:07.569 CC module/accel/ioat/accel_ioat.o
00:03:07.569 CC module/keyring/linux/keyring.o
00:03:07.569 CC module/fsdev/aio/fsdev_aio.o
00:03:07.569 CC module/accel/error/accel_error.o
00:03:07.569 CC module/blob/bdev/blob_bdev.o
00:03:07.569 LIB libspdk_env_dpdk_rpc.a
00:03:07.569 SO libspdk_env_dpdk_rpc.so.6.0
00:03:07.569 SYMLINK libspdk_env_dpdk_rpc.so
00:03:07.569 CC module/accel/error/accel_error_rpc.o
00:03:07.569 CC module/keyring/linux/keyring_rpc.o
00:03:07.569 CC module/accel/ioat/accel_ioat_rpc.o
00:03:07.569 CC module/keyring/file/keyring_rpc.o
00:03:07.569 LIB libspdk_scheduler_dynamic.a
00:03:07.569 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:07.569 LIB libspdk_accel_error.a
00:03:07.829 LIB libspdk_keyring_file.a
00:03:07.829 SO libspdk_scheduler_dynamic.so.4.0
00:03:07.829 SO libspdk_accel_error.so.2.0
00:03:07.829 LIB libspdk_blob_bdev.a
00:03:07.829 CC module/accel/dsa/accel_dsa_rpc.o
00:03:07.829 LIB libspdk_accel_ioat.a
00:03:07.829 SO libspdk_keyring_file.so.2.0
00:03:07.829 SO libspdk_blob_bdev.so.12.0
00:03:07.829 SO libspdk_accel_ioat.so.6.0
00:03:07.829 LIB libspdk_keyring_linux.a
00:03:07.829 SYMLINK libspdk_scheduler_dynamic.so
00:03:07.829 SYMLINK libspdk_accel_error.so
00:03:07.829 SYMLINK libspdk_blob_bdev.so
00:03:07.829 SO libspdk_keyring_linux.so.1.0
00:03:07.829 SYMLINK libspdk_keyring_file.so
00:03:07.829 SYMLINK libspdk_accel_ioat.so
00:03:07.829 SYMLINK libspdk_keyring_linux.so
00:03:07.829 LIB libspdk_accel_dsa.a
00:03:07.829 SO libspdk_accel_dsa.so.5.0
00:03:07.829 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:07.829 CC module/accel/iaa/accel_iaa.o
00:03:07.829 SYMLINK libspdk_accel_dsa.so
00:03:07.829 CC module/scheduler/gscheduler/gscheduler.o
00:03:08.088 CC module/bdev/delay/vbdev_delay.o
00:03:08.088 CC module/bdev/error/vbdev_error.o
00:03:08.088 CC module/blobfs/bdev/blobfs_bdev.o
00:03:08.088 CC module/bdev/gpt/gpt.o
00:03:08.088 LIB libspdk_scheduler_dpdk_governor.a
00:03:08.088 LIB libspdk_scheduler_gscheduler.a
00:03:08.088 CC module/accel/iaa/accel_iaa_rpc.o
00:03:08.088 CC module/bdev/lvol/vbdev_lvol.o
00:03:08.088 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:08.088 SO libspdk_scheduler_gscheduler.so.4.0
00:03:08.088 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:08.088 SYMLINK libspdk_scheduler_gscheduler.so
00:03:08.088 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:08.088 CC module/bdev/error/vbdev_error_rpc.o
00:03:08.088 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:08.088 LIB libspdk_fsdev_aio.a
00:03:08.088 LIB libspdk_accel_iaa.a
00:03:08.088 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:08.088 SO libspdk_fsdev_aio.so.1.0
00:03:08.088 LIB libspdk_sock_posix.a
00:03:08.347 SO libspdk_accel_iaa.so.3.0
00:03:08.347 CC module/bdev/gpt/vbdev_gpt.o
00:03:08.347 SO libspdk_sock_posix.so.6.0
00:03:08.347 LIB libspdk_blobfs_bdev.a
00:03:08.347 LIB libspdk_bdev_error.a
00:03:08.347 SYMLINK libspdk_fsdev_aio.so
00:03:08.347 SO libspdk_blobfs_bdev.so.6.0
00:03:08.347 SYMLINK libspdk_accel_iaa.so
00:03:08.347 SO libspdk_bdev_error.so.6.0
00:03:08.347 SYMLINK libspdk_sock_posix.so
00:03:08.347 LIB libspdk_bdev_delay.a
00:03:08.347 SYMLINK libspdk_blobfs_bdev.so
00:03:08.347 SYMLINK libspdk_bdev_error.so
00:03:08.347 SO libspdk_bdev_delay.so.6.0
00:03:08.347 SYMLINK libspdk_bdev_delay.so
00:03:08.347 CC module/bdev/null/bdev_null.o
00:03:08.347 CC module/bdev/malloc/bdev_malloc.o
00:03:08.347 CC module/bdev/passthru/vbdev_passthru.o
00:03:08.347 CC module/bdev/nvme/bdev_nvme.o
00:03:08.347 CC module/bdev/split/vbdev_split.o
00:03:08.347 CC module/bdev/raid/bdev_raid.o
00:03:08.609 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:08.609 LIB libspdk_bdev_gpt.a
00:03:08.609 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:08.609 SO libspdk_bdev_gpt.so.6.0
00:03:08.609 LIB libspdk_bdev_lvol.a
00:03:08.609 SYMLINK libspdk_bdev_gpt.so
00:03:08.609 SO libspdk_bdev_lvol.so.6.0
00:03:08.609 CC module/bdev/raid/bdev_raid_rpc.o
00:03:08.609 CC module/bdev/split/vbdev_split_rpc.o
00:03:08.609 CC module/bdev/null/bdev_null_rpc.o
00:03:08.609 SYMLINK libspdk_bdev_lvol.so
00:03:08.609 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:08.609 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:08.609 LIB libspdk_bdev_passthru.a
00:03:08.609 LIB libspdk_bdev_split.a
00:03:08.870 LIB libspdk_bdev_null.a
00:03:08.870 SO libspdk_bdev_split.so.6.0
00:03:08.870 SO libspdk_bdev_passthru.so.6.0
00:03:08.870 CC module/bdev/raid/bdev_raid_sb.o
00:03:08.870 SO libspdk_bdev_null.so.6.0
00:03:08.870 CC module/bdev/raid/raid0.o
00:03:08.870 SYMLINK libspdk_bdev_passthru.so
00:03:08.870 CC module/bdev/raid/raid1.o
00:03:08.870 SYMLINK libspdk_bdev_split.so
00:03:08.870 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:08.870 CC module/bdev/raid/concat.o
00:03:08.870 SYMLINK libspdk_bdev_null.so
00:03:08.870 CC module/bdev/nvme/nvme_rpc.o
00:03:08.870 LIB libspdk_bdev_zone_block.a
00:03:08.870 SO libspdk_bdev_zone_block.so.6.0
00:03:08.870 SYMLINK libspdk_bdev_zone_block.so
00:03:08.870 CC module/bdev/nvme/bdev_mdns_client.o
00:03:08.870 LIB libspdk_bdev_malloc.a
00:03:08.870 CC module/bdev/nvme/vbdev_opal.o
00:03:08.870 SO libspdk_bdev_malloc.so.6.0
00:03:08.870 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:08.870 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:09.131 SYMLINK libspdk_bdev_malloc.so
00:03:09.131 CC module/bdev/aio/bdev_aio.o
00:03:09.131 CC module/bdev/xnvme/bdev_xnvme.o
00:03:09.131 CC module/bdev/aio/bdev_aio_rpc.o
00:03:09.131 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:03:09.131 CC module/bdev/ftl/bdev_ftl.o
00:03:09.131 CC module/bdev/iscsi/bdev_iscsi.o
00:03:09.390 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:09.390 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:09.390 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:09.390 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:09.390 LIB libspdk_bdev_xnvme.a
00:03:09.390 SO libspdk_bdev_xnvme.so.3.0
00:03:09.390 LIB libspdk_bdev_aio.a
00:03:09.390 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:09.390 SO libspdk_bdev_aio.so.6.0
00:03:09.390 LIB libspdk_bdev_ftl.a
00:03:09.390 SO libspdk_bdev_ftl.so.6.0
00:03:09.390 SYMLINK libspdk_bdev_xnvme.so
00:03:09.390 SYMLINK libspdk_bdev_aio.so
00:03:09.390 LIB libspdk_bdev_raid.a
00:03:09.650 SYMLINK libspdk_bdev_ftl.so
00:03:09.650 LIB libspdk_bdev_iscsi.a
00:03:09.650 SO libspdk_bdev_raid.so.6.0
00:03:09.650 SO libspdk_bdev_iscsi.so.6.0
00:03:09.650 SYMLINK libspdk_bdev_iscsi.so
00:03:09.650 SYMLINK libspdk_bdev_raid.so
00:03:09.911 LIB libspdk_bdev_virtio.a
00:03:09.911 SO libspdk_bdev_virtio.so.6.0
00:03:09.911 SYMLINK libspdk_bdev_virtio.so
00:03:10.851 LIB libspdk_bdev_nvme.a
00:03:10.851 SO libspdk_bdev_nvme.so.7.1 00:03:11.111 SYMLINK libspdk_bdev_nvme.so 00:03:11.371 CC module/event/subsystems/sock/sock.o 00:03:11.371 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.371 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.371 CC module/event/subsystems/keyring/keyring.o 00:03:11.371 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.371 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.371 CC module/event/subsystems/vmd/vmd.o 00:03:11.371 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.371 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.646 LIB libspdk_event_sock.a 00:03:11.646 LIB libspdk_event_vhost_blk.a 00:03:11.646 LIB libspdk_event_keyring.a 00:03:11.646 LIB libspdk_event_fsdev.a 00:03:11.646 SO libspdk_event_sock.so.5.0 00:03:11.646 LIB libspdk_event_scheduler.a 00:03:11.646 SO libspdk_event_vhost_blk.so.3.0 00:03:11.646 LIB libspdk_event_vmd.a 00:03:11.646 SO libspdk_event_keyring.so.1.0 00:03:11.646 SO libspdk_event_fsdev.so.1.0 00:03:11.646 LIB libspdk_event_iobuf.a 00:03:11.646 SO libspdk_event_scheduler.so.4.0 00:03:11.646 SO libspdk_event_vmd.so.6.0 00:03:11.646 SYMLINK libspdk_event_sock.so 00:03:11.646 SO libspdk_event_iobuf.so.3.0 00:03:11.646 SYMLINK libspdk_event_vhost_blk.so 00:03:11.646 SYMLINK libspdk_event_fsdev.so 00:03:11.646 SYMLINK libspdk_event_keyring.so 00:03:11.646 SYMLINK libspdk_event_scheduler.so 00:03:11.646 SYMLINK libspdk_event_vmd.so 00:03:11.646 SYMLINK libspdk_event_iobuf.so 00:03:11.931 CC module/event/subsystems/accel/accel.o 00:03:11.931 LIB libspdk_event_accel.a 00:03:11.931 SO libspdk_event_accel.so.6.0 00:03:11.931 SYMLINK libspdk_event_accel.so 00:03:12.201 CC module/event/subsystems/bdev/bdev.o 00:03:12.462 LIB libspdk_event_bdev.a 00:03:12.462 SO libspdk_event_bdev.so.6.0 00:03:12.462 SYMLINK libspdk_event_bdev.so 00:03:12.724 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.724 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.724 CC module/event/subsystems/ublk/ublk.o 00:03:12.724 CC module/event/subsystems/nbd/nbd.o 00:03:12.724 CC module/event/subsystems/scsi/scsi.o 00:03:12.724 LIB libspdk_event_ublk.a 00:03:12.724 LIB libspdk_event_nbd.a 00:03:12.724 LIB libspdk_event_scsi.a 00:03:12.724 SO libspdk_event_nbd.so.6.0 00:03:12.724 SO libspdk_event_ublk.so.3.0 00:03:12.724 SO libspdk_event_scsi.so.6.0 00:03:12.983 LIB libspdk_event_nvmf.a 00:03:12.983 SYMLINK libspdk_event_nbd.so 00:03:12.983 SYMLINK libspdk_event_ublk.so 00:03:12.983 SYMLINK libspdk_event_scsi.so 00:03:12.983 SO libspdk_event_nvmf.so.6.0 00:03:12.983 SYMLINK libspdk_event_nvmf.so 00:03:12.983 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.983 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.241 LIB libspdk_event_vhost_scsi.a 00:03:13.241 LIB libspdk_event_iscsi.a 00:03:13.241 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.241 SO libspdk_event_iscsi.so.6.0 00:03:13.241 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.241 SYMLINK libspdk_event_iscsi.so 00:03:13.502 SO libspdk.so.6.0 00:03:13.502 SYMLINK libspdk.so 00:03:13.502 CC app/spdk_lspci/spdk_lspci.o 00:03:13.502 CC app/trace_record/trace_record.o 00:03:13.502 CC app/spdk_nvme_perf/perf.o 00:03:13.502 CXX app/trace/trace.o 00:03:13.502 CC app/nvmf_tgt/nvmf_main.o 00:03:13.502 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.502 CC examples/ioat/perf/perf.o 00:03:13.502 CC test/thread/poller_perf/poller_perf.o 00:03:13.762 CC app/spdk_tgt/spdk_tgt.o 00:03:13.762 CC examples/util/zipf/zipf.o 00:03:13.762 LINK spdk_lspci 00:03:13.762 LINK poller_perf 00:03:13.762 LINK 
spdk_trace_record 00:03:13.762 LINK zipf 00:03:13.763 LINK iscsi_tgt 00:03:13.763 LINK spdk_tgt 00:03:13.763 LINK ioat_perf 00:03:13.763 LINK nvmf_tgt 00:03:13.763 CC app/spdk_nvme_identify/identify.o 00:03:14.024 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.024 LINK spdk_trace 00:03:14.024 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.024 CC app/spdk_top/spdk_top.o 00:03:14.024 CC test/dma/test_dma/test_dma.o 00:03:14.024 CC examples/ioat/verify/verify.o 00:03:14.024 LINK spdk_nvme_discover 00:03:14.024 CC app/spdk_dd/spdk_dd.o 00:03:14.024 CC examples/thread/thread/thread_ex.o 00:03:14.024 LINK interrupt_tgt 00:03:14.283 CC examples/sock/hello_world/hello_sock.o 00:03:14.283 LINK verify 00:03:14.283 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.283 LINK thread 00:03:14.283 CC examples/vmd/led/led.o 00:03:14.283 LINK spdk_nvme_perf 00:03:14.283 LINK test_dma 00:03:14.542 CC test/app/bdev_svc/bdev_svc.o 00:03:14.542 LINK hello_sock 00:03:14.542 LINK lsvmd 00:03:14.542 LINK spdk_dd 00:03:14.542 LINK led 00:03:14.542 LINK bdev_svc 00:03:14.542 TEST_HEADER include/spdk/accel.h 00:03:14.542 TEST_HEADER include/spdk/accel_module.h 00:03:14.542 TEST_HEADER include/spdk/assert.h 00:03:14.542 TEST_HEADER include/spdk/barrier.h 00:03:14.542 TEST_HEADER include/spdk/base64.h 00:03:14.542 TEST_HEADER include/spdk/bdev.h 00:03:14.542 TEST_HEADER include/spdk/bdev_module.h 00:03:14.542 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.542 TEST_HEADER include/spdk/bit_array.h 00:03:14.542 TEST_HEADER include/spdk/bit_pool.h 00:03:14.542 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.542 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.542 TEST_HEADER include/spdk/blobfs.h 00:03:14.542 TEST_HEADER include/spdk/blob.h 00:03:14.542 TEST_HEADER include/spdk/conf.h 00:03:14.542 TEST_HEADER include/spdk/config.h 00:03:14.542 TEST_HEADER include/spdk/cpuset.h 00:03:14.542 TEST_HEADER include/spdk/crc16.h 00:03:14.542 TEST_HEADER include/spdk/crc32.h 00:03:14.542 TEST_HEADER include/spdk/crc64.h 00:03:14.542 TEST_HEADER include/spdk/dif.h 00:03:14.542 TEST_HEADER include/spdk/dma.h 00:03:14.542 TEST_HEADER include/spdk/endian.h 00:03:14.542 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.542 TEST_HEADER include/spdk/env.h 00:03:14.542 LINK spdk_nvme_identify 00:03:14.542 TEST_HEADER include/spdk/event.h 00:03:14.542 TEST_HEADER include/spdk/fd_group.h 00:03:14.542 TEST_HEADER include/spdk/fd.h 00:03:14.542 TEST_HEADER include/spdk/file.h 00:03:14.542 TEST_HEADER include/spdk/fsdev.h 00:03:14.542 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.542 TEST_HEADER include/spdk/ftl.h 00:03:14.542 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.542 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.542 TEST_HEADER include/spdk/hexlify.h 00:03:14.542 TEST_HEADER include/spdk/histogram_data.h 00:03:14.542 TEST_HEADER include/spdk/idxd.h 00:03:14.803 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.804 TEST_HEADER include/spdk/init.h 00:03:14.804 TEST_HEADER include/spdk/ioat.h 00:03:14.804 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.804 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.804 TEST_HEADER include/spdk/json.h 00:03:14.804 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.804 TEST_HEADER include/spdk/keyring.h 00:03:14.804 TEST_HEADER include/spdk/keyring_module.h 00:03:14.804 TEST_HEADER include/spdk/likely.h 00:03:14.804 TEST_HEADER include/spdk/log.h 00:03:14.804 TEST_HEADER include/spdk/lvol.h 00:03:14.804 TEST_HEADER include/spdk/md5.h 00:03:14.804 TEST_HEADER include/spdk/memory.h 00:03:14.804 TEST_HEADER 
include/spdk/mmio.h 00:03:14.804 TEST_HEADER include/spdk/nbd.h 00:03:14.804 TEST_HEADER include/spdk/net.h 00:03:14.804 TEST_HEADER include/spdk/notify.h 00:03:14.804 TEST_HEADER include/spdk/nvme.h 00:03:14.804 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.804 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.804 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.804 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.804 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.804 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.804 CC test/event/event_perf/event_perf.o 00:03:14.804 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.804 CC app/vhost/vhost.o 00:03:14.804 TEST_HEADER include/spdk/nvmf.h 00:03:14.804 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.804 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.804 TEST_HEADER include/spdk/opal.h 00:03:14.804 TEST_HEADER include/spdk/opal_spec.h 00:03:14.804 TEST_HEADER include/spdk/pci_ids.h 00:03:14.804 TEST_HEADER include/spdk/pipe.h 00:03:14.804 TEST_HEADER include/spdk/queue.h 00:03:14.804 CC app/fio/nvme/fio_plugin.o 00:03:14.804 TEST_HEADER include/spdk/reduce.h 00:03:14.804 CC test/env/vtophys/vtophys.o 00:03:14.804 TEST_HEADER include/spdk/rpc.h 00:03:14.804 TEST_HEADER include/spdk/scheduler.h 00:03:14.804 TEST_HEADER include/spdk/scsi.h 00:03:14.804 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.804 TEST_HEADER include/spdk/sock.h 00:03:14.804 TEST_HEADER include/spdk/stdinc.h 00:03:14.804 TEST_HEADER include/spdk/string.h 00:03:14.804 TEST_HEADER include/spdk/thread.h 00:03:14.804 TEST_HEADER include/spdk/trace.h 00:03:14.804 TEST_HEADER include/spdk/trace_parser.h 00:03:14.804 TEST_HEADER include/spdk/tree.h 00:03:14.804 TEST_HEADER include/spdk/ublk.h 00:03:14.804 TEST_HEADER include/spdk/util.h 00:03:14.804 TEST_HEADER include/spdk/uuid.h 00:03:14.804 TEST_HEADER include/spdk/version.h 00:03:14.804 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.804 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.804 TEST_HEADER include/spdk/vhost.h 00:03:14.804 TEST_HEADER include/spdk/vmd.h 00:03:14.804 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.804 TEST_HEADER include/spdk/xor.h 00:03:14.804 TEST_HEADER include/spdk/zipf.h 00:03:14.804 CXX test/cpp_headers/accel.o 00:03:14.804 CC examples/idxd/perf/perf.o 00:03:14.804 LINK event_perf 00:03:14.804 LINK vtophys 00:03:14.804 LINK vhost 00:03:14.804 LINK spdk_top 00:03:14.804 CC test/app/histogram_perf/histogram_perf.o 00:03:14.804 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.065 CXX test/cpp_headers/accel_module.o 00:03:15.065 CXX test/cpp_headers/assert.o 00:03:15.065 LINK histogram_perf 00:03:15.065 CC test/event/reactor/reactor.o 00:03:15.065 CC test/app/jsoncat/jsoncat.o 00:03:15.065 CC test/rpc_client/rpc_client_test.o 00:03:15.065 CXX test/cpp_headers/barrier.o 00:03:15.065 LINK idxd_perf 00:03:15.326 LINK reactor 00:03:15.326 LINK jsoncat 00:03:15.326 LINK mem_callbacks 00:03:15.326 CC test/accel/dif/dif.o 00:03:15.326 CXX test/cpp_headers/base64.o 00:03:15.326 LINK nvme_fuzz 00:03:15.326 LINK spdk_nvme 00:03:15.326 LINK rpc_client_test 00:03:15.326 CC test/blobfs/mkfs/mkfs.o 00:03:15.326 CC test/event/reactor_perf/reactor_perf.o 00:03:15.326 CXX test/cpp_headers/bdev.o 00:03:15.326 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:15.326 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.587 CC app/fio/bdev/fio_plugin.o 00:03:15.587 CC examples/accel/perf/accel_perf.o 00:03:15.587 CC test/env/memory/memory_ut.o 00:03:15.587 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.587 
LINK mkfs 00:03:15.587 LINK reactor_perf 00:03:15.587 CXX test/cpp_headers/bdev_module.o 00:03:15.587 LINK env_dpdk_post_init 00:03:15.587 CXX test/cpp_headers/bdev_zone.o 00:03:15.587 LINK hello_fsdev 00:03:15.847 CXX test/cpp_headers/bit_array.o 00:03:15.847 CC test/event/app_repeat/app_repeat.o 00:03:15.847 CXX test/cpp_headers/bit_pool.o 00:03:15.847 LINK app_repeat 00:03:15.847 CC test/env/pci/pci_ut.o 00:03:15.847 CC test/lvol/esnap/esnap.o 00:03:16.109 LINK spdk_bdev 00:03:16.109 CC test/nvme/aer/aer.o 00:03:16.109 LINK accel_perf 00:03:16.109 CXX test/cpp_headers/blob_bdev.o 00:03:16.109 LINK dif 00:03:16.109 CC test/event/scheduler/scheduler.o 00:03:16.109 CC test/app/stub/stub.o 00:03:16.109 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.370 LINK aer 00:03:16.370 CC examples/nvme/hello_world/hello_world.o 00:03:16.370 CC examples/blob/hello_world/hello_blob.o 00:03:16.370 LINK scheduler 00:03:16.370 CXX test/cpp_headers/blobfs.o 00:03:16.370 LINK stub 00:03:16.370 LINK pci_ut 00:03:16.370 CC test/nvme/reset/reset.o 00:03:16.370 CXX test/cpp_headers/blob.o 00:03:16.632 LINK hello_world 00:03:16.632 CC examples/nvme/reconnect/reconnect.o 00:03:16.632 LINK hello_blob 00:03:16.632 CC test/nvme/sgl/sgl.o 00:03:16.632 LINK memory_ut 00:03:16.632 CC test/nvme/e2edp/nvme_dp.o 00:03:16.632 CXX test/cpp_headers/conf.o 00:03:16.632 CC test/nvme/overhead/overhead.o 00:03:16.632 LINK reset 00:03:16.632 CXX test/cpp_headers/config.o 00:03:16.892 CXX test/cpp_headers/cpuset.o 00:03:16.892 CC examples/blob/cli/blobcli.o 00:03:16.892 LINK reconnect 00:03:16.892 LINK sgl 00:03:16.892 CC test/nvme/err_injection/err_injection.o 00:03:16.892 LINK nvme_dp 00:03:16.892 CXX test/cpp_headers/crc16.o 00:03:16.892 LINK overhead 00:03:16.892 CC test/nvme/startup/startup.o 00:03:16.892 CXX test/cpp_headers/crc32.o 00:03:16.892 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.892 LINK err_injection 00:03:16.892 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.153 LINK iscsi_fuzz 00:03:17.153 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.153 CXX test/cpp_headers/crc64.o 00:03:17.153 LINK startup 00:03:17.153 CC examples/nvme/arbitration/arbitration.o 00:03:17.153 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.153 CC test/bdev/bdevio/bdevio.o 00:03:17.153 LINK hello_bdev 00:03:17.153 CXX test/cpp_headers/dif.o 00:03:17.153 LINK blobcli 00:03:17.415 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.415 CC test/nvme/reserve/reserve.o 00:03:17.415 CXX test/cpp_headers/dma.o 00:03:17.415 CXX test/cpp_headers/endian.o 00:03:17.415 LINK arbitration 00:03:17.415 CC examples/nvme/hotplug/hotplug.o 00:03:17.675 LINK nvme_manage 00:03:17.675 LINK vhost_fuzz 00:03:17.675 CXX test/cpp_headers/env_dpdk.o 00:03:17.675 CXX test/cpp_headers/env.o 00:03:17.675 LINK reserve 00:03:17.675 LINK bdevio 00:03:17.675 CXX test/cpp_headers/event.o 00:03:17.675 CXX test/cpp_headers/fd_group.o 00:03:17.675 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:17.675 CXX test/cpp_headers/fd.o 00:03:17.675 CC test/nvme/simple_copy/simple_copy.o 00:03:17.675 LINK hotplug 00:03:17.675 CC examples/nvme/abort/abort.o 00:03:17.675 CXX test/cpp_headers/file.o 00:03:17.936 CXX test/cpp_headers/fsdev.o 00:03:17.936 CXX test/cpp_headers/fsdev_module.o 00:03:17.936 CXX test/cpp_headers/ftl.o 00:03:17.936 LINK cmb_copy 00:03:17.936 CXX test/cpp_headers/fuse_dispatcher.o 00:03:17.936 CXX test/cpp_headers/gpt_spec.o 00:03:17.936 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.936 CXX test/cpp_headers/hexlify.o 00:03:17.936 LINK simple_copy 
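The TEST_HEADER list above and the CXX test/cpp_headers/*.o objects interleaved through this stretch compile every public spdk/*.h header as its own C++ translation unit, so a header that quietly leans on another include, or lacks C++-safe guards, fails on its own. A rough standalone equivalent, with illustrative paths and flags rather than the exact autotest invocation:

    # Compile each public header in isolation and report any that do not stand alone.
    for hdr in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$hdr")>" \
            | g++ -Iinclude -x c++ -std=c++17 -c -o /dev/null - \
            || echo "not self-contained: $hdr"
    done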
00:03:18.195 CXX test/cpp_headers/histogram_data.o 00:03:18.195 CC test/nvme/connect_stress/connect_stress.o 00:03:18.195 CC test/nvme/boot_partition/boot_partition.o 00:03:18.195 LINK pmr_persistence 00:03:18.195 CXX test/cpp_headers/idxd.o 00:03:18.195 CC test/nvme/compliance/nvme_compliance.o 00:03:18.195 LINK abort 00:03:18.195 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.195 LINK bdevperf 00:03:18.195 CXX test/cpp_headers/idxd_spec.o 00:03:18.195 LINK connect_stress 00:03:18.195 CXX test/cpp_headers/init.o 00:03:18.195 LINK boot_partition 00:03:18.453 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.453 CC test/nvme/fdp/fdp.o 00:03:18.453 LINK fused_ordering 00:03:18.453 CXX test/cpp_headers/ioat.o 00:03:18.453 CXX test/cpp_headers/ioat_spec.o 00:03:18.453 CC test/nvme/cuse/cuse.o 00:03:18.453 LINK nvme_compliance 00:03:18.453 CXX test/cpp_headers/iscsi_spec.o 00:03:18.453 CXX test/cpp_headers/json.o 00:03:18.453 CXX test/cpp_headers/jsonrpc.o 00:03:18.453 CC examples/nvmf/nvmf/nvmf.o 00:03:18.453 CXX test/cpp_headers/keyring.o 00:03:18.453 LINK doorbell_aers 00:03:18.453 CXX test/cpp_headers/keyring_module.o 00:03:18.453 CXX test/cpp_headers/likely.o 00:03:18.710 LINK fdp 00:03:18.710 CXX test/cpp_headers/log.o 00:03:18.710 CXX test/cpp_headers/lvol.o 00:03:18.710 CXX test/cpp_headers/md5.o 00:03:18.710 CXX test/cpp_headers/memory.o 00:03:18.710 CXX test/cpp_headers/mmio.o 00:03:18.710 CXX test/cpp_headers/nbd.o 00:03:18.710 CXX test/cpp_headers/net.o 00:03:18.710 CXX test/cpp_headers/notify.o 00:03:18.710 CXX test/cpp_headers/nvme.o 00:03:18.710 CXX test/cpp_headers/nvme_intel.o 00:03:18.710 LINK nvmf 00:03:18.710 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.710 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.969 CXX test/cpp_headers/nvme_spec.o 00:03:18.969 CXX test/cpp_headers/nvme_zns.o 00:03:18.969 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.969 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.969 CXX test/cpp_headers/nvmf.o 00:03:18.969 CXX test/cpp_headers/nvmf_spec.o 00:03:18.969 CXX test/cpp_headers/nvmf_transport.o 00:03:18.969 CXX test/cpp_headers/opal.o 00:03:18.969 CXX test/cpp_headers/opal_spec.o 00:03:18.969 CXX test/cpp_headers/pci_ids.o 00:03:18.969 CXX test/cpp_headers/pipe.o 00:03:18.969 CXX test/cpp_headers/queue.o 00:03:18.969 CXX test/cpp_headers/reduce.o 00:03:18.969 CXX test/cpp_headers/rpc.o 00:03:18.969 CXX test/cpp_headers/scheduler.o 00:03:18.969 CXX test/cpp_headers/scsi.o 00:03:19.228 CXX test/cpp_headers/scsi_spec.o 00:03:19.228 CXX test/cpp_headers/sock.o 00:03:19.228 CXX test/cpp_headers/string.o 00:03:19.228 CXX test/cpp_headers/stdinc.o 00:03:19.228 CXX test/cpp_headers/thread.o 00:03:19.228 CXX test/cpp_headers/trace.o 00:03:19.228 CXX test/cpp_headers/trace_parser.o 00:03:19.228 CXX test/cpp_headers/tree.o 00:03:19.228 CXX test/cpp_headers/ublk.o 00:03:19.228 CXX test/cpp_headers/util.o 00:03:19.228 CXX test/cpp_headers/uuid.o 00:03:19.228 CXX test/cpp_headers/version.o 00:03:19.228 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.228 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.228 CXX test/cpp_headers/vhost.o 00:03:19.228 CXX test/cpp_headers/vmd.o 00:03:19.228 CXX test/cpp_headers/xor.o 00:03:19.228 CXX test/cpp_headers/zipf.o 00:03:19.801 LINK cuse 00:03:20.769 LINK esnap 00:03:21.030 00:03:21.030 real 1m5.630s 00:03:21.030 user 6m10.661s 00:03:21.030 sys 1m3.450s 00:03:21.030 ************************************ 00:03:21.030 END TEST make 00:03:21.030 ************************************ 00:03:21.030 13:16:09 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.030 13:16:09 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.030 13:16:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.030 13:16:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.030 13:16:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.030 13:16:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.030 13:16:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.030 13:16:09 -- pm/common@44 -- $ pid=5075 00:03:21.030 13:16:09 -- pm/common@50 -- $ kill -TERM 5075 00:03:21.030 13:16:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.030 13:16:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.030 13:16:09 -- pm/common@44 -- $ pid=5076 00:03:21.030 13:16:09 -- pm/common@50 -- $ kill -TERM 5076 00:03:21.030 13:16:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.030 13:16:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:21.030 13:16:09 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:21.030 13:16:09 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:21.030 13:16:09 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:21.030 13:16:09 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:21.030 13:16:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.030 13:16:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.030 13:16:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.030 13:16:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.030 13:16:09 -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.030 13:16:09 -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.030 13:16:09 -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.030 13:16:09 -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.030 13:16:09 -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.030 13:16:09 -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.030 13:16:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.030 13:16:09 -- scripts/common.sh@344 -- # case "$op" in 00:03:21.030 13:16:09 -- scripts/common.sh@345 -- # : 1 00:03:21.030 13:16:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.030 13:16:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.030 13:16:09 -- scripts/common.sh@365 -- # decimal 1 00:03:21.030 13:16:09 -- scripts/common.sh@353 -- # local d=1 00:03:21.030 13:16:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.030 13:16:09 -- scripts/common.sh@355 -- # echo 1 00:03:21.030 13:16:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.030 13:16:09 -- scripts/common.sh@366 -- # decimal 2 00:03:21.030 13:16:09 -- scripts/common.sh@353 -- # local d=2 00:03:21.030 13:16:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.030 13:16:09 -- scripts/common.sh@355 -- # echo 2 00:03:21.030 13:16:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.030 13:16:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.030 13:16:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.030 13:16:09 -- scripts/common.sh@368 -- # return 0 00:03:21.030 13:16:09 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.030 13:16:09 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.030 --rc genhtml_branch_coverage=1 00:03:21.030 --rc genhtml_function_coverage=1 00:03:21.030 --rc genhtml_legend=1 00:03:21.030 --rc geninfo_all_blocks=1 00:03:21.030 --rc geninfo_unexecuted_blocks=1 00:03:21.030 00:03:21.030 ' 00:03:21.030 13:16:09 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.030 --rc genhtml_branch_coverage=1 00:03:21.030 --rc genhtml_function_coverage=1 00:03:21.030 --rc genhtml_legend=1 00:03:21.030 --rc geninfo_all_blocks=1 00:03:21.030 --rc geninfo_unexecuted_blocks=1 00:03:21.030 00:03:21.030 ' 00:03:21.030 13:16:09 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:21.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.030 --rc genhtml_branch_coverage=1 00:03:21.030 --rc genhtml_function_coverage=1 00:03:21.030 --rc genhtml_legend=1 00:03:21.030 --rc geninfo_all_blocks=1 00:03:21.031 --rc geninfo_unexecuted_blocks=1 00:03:21.031 00:03:21.031 ' 00:03:21.031 13:16:09 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:21.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.031 --rc genhtml_branch_coverage=1 00:03:21.031 --rc genhtml_function_coverage=1 00:03:21.031 --rc genhtml_legend=1 00:03:21.031 --rc geninfo_all_blocks=1 00:03:21.031 --rc geninfo_unexecuted_blocks=1 00:03:21.031 00:03:21.031 ' 00:03:21.031 13:16:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.031 13:16:09 -- nvmf/common.sh@7 -- # uname -s 00:03:21.031 13:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.031 13:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.031 13:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.031 13:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.031 13:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.031 13:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.031 13:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.031 13:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.031 13:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.031 13:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.031 13:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:03:21.031 
13:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:03:21.031 13:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.031 13:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.031 13:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:21.031 13:16:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.031 13:16:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.031 13:16:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:21.031 13:16:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.031 13:16:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.031 13:16:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.031 13:16:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.031 13:16:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.031 13:16:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.031 13:16:09 -- paths/export.sh@5 -- # export PATH 00:03:21.031 13:16:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.031 13:16:09 -- nvmf/common.sh@51 -- # : 0 00:03:21.031 13:16:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:21.031 13:16:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:21.031 13:16:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.031 13:16:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.031 13:16:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.031 13:16:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:21.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:21.031 13:16:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:21.031 13:16:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:21.031 13:16:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:21.031 13:16:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.031 13:16:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.031 13:16:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.031 13:16:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.031 13:16:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.031 13:16:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.031 13:16:09 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.031 13:16:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.293 13:16:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.293 13:16:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.293 13:16:09 -- spdk/autotest.sh@48 -- # udevadm_pid=54227 00:03:21.293 13:16:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.293 13:16:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.293 13:16:09 -- pm/common@17 -- # local monitor 00:03:21.293 13:16:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.293 13:16:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.293 13:16:09 -- pm/common@25 -- # sleep 1 00:03:21.293 13:16:09 -- pm/common@21 -- # date +%s 00:03:21.293 13:16:09 -- pm/common@21 -- # date +%s 00:03:21.293 13:16:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732626969 00:03:21.293 13:16:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732626969 00:03:21.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732626969_collect-vmstat.pm.log 00:03:21.293 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732626969_collect-cpu-load.pm.log 00:03:22.239 13:16:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.239 13:16:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.239 13:16:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.239 13:16:10 -- common/autotest_common.sh@10 -- # set +x 00:03:22.239 13:16:10 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.239 13:16:10 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:22.239 13:16:10 -- common/autotest_common.sh@10 -- # set +x 00:03:22.239 13:16:10 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.239 13:16:10 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.239 13:16:10 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.239 13:16:10 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.239 13:16:10 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.239 13:16:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.239 13:16:10 -- common/autotest_common.sh@1457 -- # uname 00:03:22.239 13:16:10 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:22.239 13:16:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.239 13:16:10 -- common/autotest_common.sh@1477 -- # uname 00:03:22.239 13:16:10 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:22.239 13:16:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.239 13:16:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.239 lcov: LCOV version 1.15 00:03:22.239 13:16:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:37.146 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.146 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.044 13:16:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:52.044 13:16:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.044 13:16:39 -- common/autotest_common.sh@10 -- # set +x 00:03:52.044 13:16:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:52.044 13:16:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.044 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:52.044 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:52.044 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:52.044 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:52.044 13:16:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:52.044 13:16:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:52.044 13:16:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:52.044 13:16:40 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:52.044 13:16:40 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.044 13:16:40 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:52.044 13:16:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:52.044 13:16:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.044 13:16:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:52.044 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.044 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.044 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:52.044 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:52.044 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.306 No valid GPT data, bailing 00:03:52.306 13:16:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.306 13:16:40 -- scripts/common.sh@394 -- # pt= 00:03:52.306 13:16:40 -- scripts/common.sh@395 -- # return 1 00:03:52.306 13:16:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.306 1+0 records in 00:03:52.306 1+0 records out 00:03:52.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111523 s, 94.0 MB/s 00:03:52.306 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.306 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.306 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:52.306 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:52.306 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:52.306 No valid GPT data, bailing 00:03:52.306 13:16:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:52.306 13:16:40 -- scripts/common.sh@394 -- # pt= 00:03:52.306 13:16:40 -- scripts/common.sh@395 -- # return 1 00:03:52.306 13:16:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:52.306 1+0 records in 00:03:52.306 1+0 records out 00:03:52.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375483 s, 279 MB/s 00:03:52.306 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.306 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.306 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:52.306 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:52.307 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:52.307 No valid GPT data, bailing 00:03:52.307 13:16:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:52.307 13:16:40 -- scripts/common.sh@394 -- # pt= 00:03:52.307 13:16:40 -- scripts/common.sh@395 -- # return 1 00:03:52.307 13:16:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:52.307 1+0 
records in 00:03:52.307 1+0 records out 00:03:52.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514622 s, 204 MB/s 00:03:52.307 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.307 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.307 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:52.307 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:52.307 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:52.307 No valid GPT data, bailing 00:03:52.307 13:16:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:52.307 13:16:40 -- scripts/common.sh@394 -- # pt= 00:03:52.307 13:16:40 -- scripts/common.sh@395 -- # return 1 00:03:52.307 13:16:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:52.307 1+0 records in 00:03:52.307 1+0 records out 00:03:52.307 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049418 s, 212 MB/s 00:03:52.307 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.307 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.307 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:52.307 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:52.307 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:52.566 No valid GPT data, bailing 00:03:52.566 13:16:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:52.566 13:16:40 -- scripts/common.sh@394 -- # pt= 00:03:52.566 13:16:40 -- scripts/common.sh@395 -- # return 1 00:03:52.566 13:16:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:52.566 1+0 records in 00:03:52.566 1+0 records out 00:03:52.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524629 s, 200 MB/s 00:03:52.566 13:16:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.566 13:16:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.566 13:16:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:52.566 13:16:40 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:52.566 13:16:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:52.566 No valid GPT data, bailing 00:03:52.566 13:16:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:52.566 13:16:41 -- scripts/common.sh@394 -- # pt= 00:03:52.566 13:16:41 -- scripts/common.sh@395 -- # return 1 00:03:52.566 13:16:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:52.566 1+0 records in 00:03:52.566 1+0 records out 00:03:52.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416258 s, 252 MB/s 00:03:52.566 13:16:41 -- spdk/autotest.sh@105 -- # sync 00:03:52.566 13:16:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.566 13:16:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.567 13:16:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:54.479 13:16:42 -- spdk/autotest.sh@111 -- # uname -s 00:03:54.479 13:16:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:54.479 13:16:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:54.479 13:16:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:54.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.003 
Hugepages 00:03:55.003 node hugesize free / total 00:03:55.003 node0 1048576kB 0 / 0 00:03:55.003 node0 2048kB 0 / 0 00:03:55.003 00:03:55.003 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.003 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:55.294 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:55.294 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:55.294 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:55.294 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:55.294 13:16:43 -- spdk/autotest.sh@117 -- # uname -s 00:03:55.294 13:16:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:55.294 13:16:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:55.294 13:16:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.187 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.187 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.187 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.187 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.187 13:16:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:57.129 13:16:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:57.129 13:16:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:57.129 13:16:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:57.389 13:16:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:57.389 13:16:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:57.389 13:16:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:57.389 13:16:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.389 13:16:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:57.389 13:16:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:57.389 13:16:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:03:57.389 13:16:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:57.389 13:16:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.650 Waiting for block devices as requested 00:03:57.650 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:57.912 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:57.912 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:03:57.912 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.196 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:03.196 13:16:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.196 13:16:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1543 -- # continue 00:04:03.196 13:16:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.196 13:16:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1543 -- # continue 00:04:03.196 13:16:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.196 13:16:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.196 13:16:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.196 13:16:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.196 13:16:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:03.196 13:16:51 -- common/autotest_common.sh@1543 -- # continue 00:04:03.196 13:16:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.196 13:16:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:03.196 13:16:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:03.197 13:16:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:03.197 13:16:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.197 13:16:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.197 13:16:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:03.197 13:16:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.197 13:16:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.197 13:16:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:03.197 13:16:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.197 13:16:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.197 13:16:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.197 13:16:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
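The loop traced above resolves each PCI address to its /dev/nvmeX controller node, then reads two fields from nvme id-ctrl: oacs, whose bit 3 (0x8) advertises Namespace Management support, and unvmcap, the unallocated NVM capacity. Reduced to a single controller it amounts to the following; /dev/nvme1 is one of the device nodes from this run, and the awk parsing is a simplification of the grep/cut pipeline in the trace:

    # Check whether the controller advertises Namespace Management (OACS bit 3).
    oacs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oacs/ {gsub(/ /, "", $2); print $2}')
    if (( (oacs & 0x8) != 0 )); then
        echo "/dev/nvme1: namespace management supported"
    fi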
00:04:03.197 13:16:51 -- common/autotest_common.sh@1543 -- # continue 00:04:03.197 13:16:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:03.197 13:16:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.197 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:03.197 13:16:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:03.197 13:16:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.197 13:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:03.197 13:16:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.026 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.026 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.026 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.026 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.285 13:16:52 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:04.285 13:16:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.285 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:04.285 13:16:52 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:04.285 13:16:52 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:04.285 13:16:52 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.285 13:16:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:04.285 13:16:52 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:04.285 13:16:52 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:04.285 13:16:52 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:04.285 13:16:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:04.285 13:16:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:04.285 13:16:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:04.285 13:16:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.285 13:16:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.285 13:16:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.285 13:16:52 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:04.285 13:16:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:04.285 13:16:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.285 13:16:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.285 13:16:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.285 13:16:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.285 13:16:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.285 13:16:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:04.285 13:16:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:04.285 13:16:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.285 13:16:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.285 13:16:52 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:04.285 13:16:52 -- common/autotest_common.sh@1572 -- # return 0 00:04:04.285 13:16:52 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:04.285 13:16:52 -- common/autotest_common.sh@1580 -- # return 0 00:04:04.285 13:16:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:04.285 13:16:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:04.285 13:16:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.285 13:16:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.285 13:16:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:04.285 13:16:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.285 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:04.285 13:16:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:04.285 13:16:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.285 13:16:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.285 13:16:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.285 13:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:04.285 ************************************ 00:04:04.285 START TEST env 00:04:04.285 ************************************ 00:04:04.285 13:16:52 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.285 * Looking for test storage... 00:04:04.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.285 13:16:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.285 13:16:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.285 13:16:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.544 13:16:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.544 13:16:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.544 13:16:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.544 13:16:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.544 13:16:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.544 13:16:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.544 13:16:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.544 13:16:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.544 13:16:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.544 13:16:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.544 13:16:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.544 13:16:52 env -- scripts/common.sh@344 -- # case "$op" in 00:04:04.544 13:16:52 env -- scripts/common.sh@345 -- # : 1 00:04:04.544 13:16:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.544 13:16:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.544 13:16:52 env -- scripts/common.sh@365 -- # decimal 1 00:04:04.544 13:16:52 env -- scripts/common.sh@353 -- # local d=1 00:04:04.544 13:16:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.544 13:16:52 env -- scripts/common.sh@355 -- # echo 1 00:04:04.544 13:16:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.544 13:16:52 env -- scripts/common.sh@366 -- # decimal 2 00:04:04.544 13:16:52 env -- scripts/common.sh@353 -- # local d=2 00:04:04.544 13:16:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.544 13:16:52 env -- scripts/common.sh@355 -- # echo 2 00:04:04.544 13:16:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.544 13:16:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.544 13:16:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.544 13:16:52 env -- scripts/common.sh@368 -- # return 0 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.544 --rc genhtml_branch_coverage=1 00:04:04.544 --rc genhtml_function_coverage=1 00:04:04.544 --rc genhtml_legend=1 00:04:04.544 --rc geninfo_all_blocks=1 00:04:04.544 --rc geninfo_unexecuted_blocks=1 00:04:04.544 00:04:04.544 ' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.544 --rc genhtml_branch_coverage=1 00:04:04.544 --rc genhtml_function_coverage=1 00:04:04.544 --rc genhtml_legend=1 00:04:04.544 --rc geninfo_all_blocks=1 00:04:04.544 --rc geninfo_unexecuted_blocks=1 00:04:04.544 00:04:04.544 ' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.544 --rc genhtml_branch_coverage=1 00:04:04.544 --rc genhtml_function_coverage=1 00:04:04.544 --rc genhtml_legend=1 00:04:04.544 --rc geninfo_all_blocks=1 00:04:04.544 --rc geninfo_unexecuted_blocks=1 00:04:04.544 00:04:04.544 ' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.544 --rc genhtml_branch_coverage=1 00:04:04.544 --rc genhtml_function_coverage=1 00:04:04.544 --rc genhtml_legend=1 00:04:04.544 --rc geninfo_all_blocks=1 00:04:04.544 --rc geninfo_unexecuted_blocks=1 00:04:04.544 00:04:04.544 ' 00:04:04.544 13:16:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.544 13:16:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.544 13:16:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.544 ************************************ 00:04:04.544 START TEST env_memory 00:04:04.544 ************************************ 00:04:04.544 13:16:52 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.544 00:04:04.544 00:04:04.544 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.544 http://cunit.sourceforge.net/ 00:04:04.544 00:04:04.544 00:04:04.544 Suite: memory 00:04:04.545 Test: alloc and free memory map ...[2024-11-26 13:16:52.931887] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.545 passed 00:04:04.545 Test: mem map translation ...[2024-11-26 13:16:52.977749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.545 [2024-11-26 13:16:52.977948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.545 [2024-11-26 13:16:52.978018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.545 [2024-11-26 13:16:52.978035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.545 passed 00:04:04.545 Test: mem map registration ...[2024-11-26 13:16:53.047774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.545 [2024-11-26 13:16:53.047838] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.545 passed 00:04:04.806 Test: mem map adjacent registrations ...passed 00:04:04.806 00:04:04.806 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.806 suites 1 1 n/a 0 0 00:04:04.806 tests 4 4 4 0 0 00:04:04.806 asserts 152 152 152 0 n/a 00:04:04.806 00:04:04.806 Elapsed time = 0.254 seconds 00:04:04.806 00:04:04.806 real 0m0.289s 00:04:04.806 user 0m0.265s 00:04:04.806 sys 0m0.016s 00:04:04.806 13:16:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.806 ************************************ 00:04:04.806 END TEST env_memory 00:04:04.806 ************************************ 00:04:04.806 13:16:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.806 13:16:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.806 13:16:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.806 13:16:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.806 13:16:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.806 ************************************ 00:04:04.806 START TEST env_vtophys 00:04:04.806 ************************************ 00:04:04.806 13:16:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.806 EAL: lib.eal log level changed from notice to debug 00:04:04.806 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.806 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.806 EAL: Maximum logical cores by configuration: 128 00:04:04.806 EAL: Detected CPU lcores: 10 00:04:04.806 EAL: Detected NUMA nodes: 1 00:04:04.806 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.806 EAL: Detected shared linkage of DPDK 00:04:04.806 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:04.806 EAL: Selected IOVA mode 'PA' 00:04:04.806 EAL: Probing VFIO support... 00:04:04.806 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.806 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.806 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.806 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.806 EAL: Setting up physically contiguous memory... 00:04:04.806 EAL: Setting maximum number of open files to 524288 00:04:04.806 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.806 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.806 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.806 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.806 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.806 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.806 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.806 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.806 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.806 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.806 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.806 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.806 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.806 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.806 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.806 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.806 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.806 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.806 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.806 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.806 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.806 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.806 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.806 EAL: Hugepages will be freed exactly as allocated. 00:04:04.806 EAL: No shared files mode enabled, IPC is disabled 00:04:04.806 EAL: No shared files mode enabled, IPC is disabled 00:04:05.067 EAL: TSC frequency is ~2600000 KHz 00:04:05.067 EAL: Main lcore 0 is ready (tid=7fa571336a40;cpuset=[0]) 00:04:05.067 EAL: Trying to obtain current memory policy. 00:04:05.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.067 EAL: Restoring previous memory policy: 0 00:04:05.067 EAL: request: mp_malloc_sync 00:04:05.067 EAL: No shared files mode enabled, IPC is disabled 00:04:05.067 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.067 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.067 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.067 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.067 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:05.067 00:04:05.067 00:04:05.067 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.067 http://cunit.sourceforge.net/ 00:04:05.067 00:04:05.067 00:04:05.067 Suite: components_suite 00:04:05.329 Test: vtophys_malloc_test ...passed 00:04:05.329 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.329 EAL: Restoring previous memory policy: 4 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.329 EAL: Trying to obtain current memory policy. 00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.329 EAL: Restoring previous memory policy: 4 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.329 EAL: Trying to obtain current memory policy. 00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.329 EAL: Restoring previous memory policy: 4 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.329 EAL: Trying to obtain current memory policy. 00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.329 EAL: Restoring previous memory policy: 4 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.329 EAL: Trying to obtain current memory policy. 00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.329 EAL: Restoring previous memory policy: 4 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.329 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.329 EAL: request: mp_malloc_sync 00:04:05.329 EAL: No shared files mode enabled, IPC is disabled 00:04:05.329 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.329 EAL: Trying to obtain current memory policy. 
00:04:05.329 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.590 EAL: Restoring previous memory policy: 4 00:04:05.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.590 EAL: request: mp_malloc_sync 00:04:05.590 EAL: No shared files mode enabled, IPC is disabled 00:04:05.590 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.590 EAL: request: mp_malloc_sync 00:04:05.590 EAL: No shared files mode enabled, IPC is disabled 00:04:05.590 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.590 EAL: Trying to obtain current memory policy. 00:04:05.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.590 EAL: Restoring previous memory policy: 4 00:04:05.590 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.590 EAL: request: mp_malloc_sync 00:04:05.590 EAL: No shared files mode enabled, IPC is disabled 00:04:05.590 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.851 EAL: request: mp_malloc_sync 00:04:05.851 EAL: No shared files mode enabled, IPC is disabled 00:04:05.851 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.851 EAL: Trying to obtain current memory policy. 00:04:05.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.112 EAL: Restoring previous memory policy: 4 00:04:06.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.112 EAL: request: mp_malloc_sync 00:04:06.112 EAL: No shared files mode enabled, IPC is disabled 00:04:06.112 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.373 EAL: request: mp_malloc_sync 00:04:06.373 EAL: No shared files mode enabled, IPC is disabled 00:04:06.373 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.373 EAL: Trying to obtain current memory policy. 00:04:06.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.635 EAL: Restoring previous memory policy: 4 00:04:06.635 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.635 EAL: request: mp_malloc_sync 00:04:06.635 EAL: No shared files mode enabled, IPC is disabled 00:04:06.635 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.157 EAL: request: mp_malloc_sync 00:04:07.157 EAL: No shared files mode enabled, IPC is disabled 00:04:07.157 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.417 EAL: Trying to obtain current memory policy. 
00:04:07.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.678 EAL: Restoring previous memory policy: 4 00:04:07.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.678 EAL: request: mp_malloc_sync 00:04:07.678 EAL: No shared files mode enabled, IPC is disabled 00:04:07.678 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.509 EAL: request: mp_malloc_sync 00:04:08.509 EAL: No shared files mode enabled, IPC is disabled 00:04:08.509 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.452 passed 00:04:09.452 00:04:09.452 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.452 suites 1 1 n/a 0 0 00:04:09.452 tests 2 2 2 0 0 00:04:09.452 asserts 5803 5803 5803 0 n/a 00:04:09.452 00:04:09.452 Elapsed time = 4.239 seconds 00:04:09.452 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.452 EAL: request: mp_malloc_sync 00:04:09.452 EAL: No shared files mode enabled, IPC is disabled 00:04:09.452 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.452 EAL: No shared files mode enabled, IPC is disabled 00:04:09.452 EAL: No shared files mode enabled, IPC is disabled 00:04:09.452 EAL: No shared files mode enabled, IPC is disabled 00:04:09.452 ************************************ 00:04:09.452 END TEST env_vtophys 00:04:09.452 ************************************ 00:04:09.452 00:04:09.452 real 0m4.506s 00:04:09.452 user 0m3.714s 00:04:09.452 sys 0m0.646s 00:04:09.452 13:16:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.453 13:16:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.453 13:16:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.453 13:16:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.453 13:16:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.453 13:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.453 ************************************ 00:04:09.453 START TEST env_pci 00:04:09.453 ************************************ 00:04:09.453 13:16:57 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.453 00:04:09.453 00:04:09.453 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.453 http://cunit.sourceforge.net/ 00:04:09.453 00:04:09.453 00:04:09.453 Suite: pci 00:04:09.453 Test: pci_hook ...[2024-11-26 13:16:57.789484] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56970 has claimed it 00:04:09.453 passed 00:04:09.453 00:04:09.453 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.453 suites 1 1 n/a 0 0 00:04:09.453 tests 1 1 1 0 0 00:04:09.453 asserts 25 25 25 0 n/a 00:04:09.453 00:04:09.453 Elapsed time = 0.006 seconds 00:04:09.453 EAL: Cannot find device (10000:00:01.0) 00:04:09.453 EAL: Failed to attach device on primary process 00:04:09.453 ************************************ 00:04:09.453 END TEST env_pci 00:04:09.453 ************************************ 00:04:09.453 00:04:09.453 real 0m0.064s 00:04:09.453 user 0m0.026s 00:04:09.453 sys 0m0.036s 00:04:09.453 13:16:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.453 13:16:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.453 13:16:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.453 13:16:57 env -- env/env.sh@15 -- # uname 00:04:09.453 13:16:57 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.453 13:16:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.453 13:16:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.453 13:16:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:09.453 13:16:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.453 13:16:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.453 ************************************ 00:04:09.453 START TEST env_dpdk_post_init 00:04:09.453 ************************************ 00:04:09.453 13:16:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.453 EAL: Detected CPU lcores: 10 00:04:09.453 EAL: Detected NUMA nodes: 1 00:04:09.453 EAL: Detected shared linkage of DPDK 00:04:09.453 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.453 EAL: Selected IOVA mode 'PA' 00:04:09.714 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.714 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:09.714 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:09.714 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:09.714 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:09.714 Starting DPDK initialization... 00:04:09.714 Starting SPDK post initialization... 00:04:09.714 SPDK NVMe probe 00:04:09.714 Attaching to 0000:00:10.0 00:04:09.714 Attaching to 0000:00:11.0 00:04:09.714 Attaching to 0000:00:12.0 00:04:09.714 Attaching to 0000:00:13.0 00:04:09.714 Attached to 0000:00:10.0 00:04:09.714 Attached to 0000:00:11.0 00:04:09.714 Attached to 0000:00:13.0 00:04:09.714 Attached to 0000:00:12.0 00:04:09.714 Cleaning up... 
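For reference, the post-init pass above can be reproduced outside the harness; a minimal sketch using the same binary and arguments visible in the run_test trace (core mask 0x1 plus the --base-virtaddr the framework adds on Linux), assuming this job's checkout path:

  cd /home/vagrant/spdk_repo/spdk
  # Probes the four emulated controllers (0000:00:10.0 through 0000:00:13.0) exactly as traced above.
  ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000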
00:04:09.714 00:04:09.714 real 0m0.235s 00:04:09.714 user 0m0.076s 00:04:09.714 sys 0m0.060s 00:04:09.714 13:16:58 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.714 ************************************ 00:04:09.714 END TEST env_dpdk_post_init 00:04:09.714 ************************************ 00:04:09.714 13:16:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:09.714 13:16:58 env -- env/env.sh@26 -- # uname 00:04:09.714 13:16:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:09.714 13:16:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.714 13:16:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.714 13:16:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.714 13:16:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.714 ************************************ 00:04:09.714 START TEST env_mem_callbacks 00:04:09.714 ************************************ 00:04:09.714 13:16:58 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:09.714 EAL: Detected CPU lcores: 10 00:04:09.714 EAL: Detected NUMA nodes: 1 00:04:09.714 EAL: Detected shared linkage of DPDK 00:04:09.714 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.714 EAL: Selected IOVA mode 'PA' 00:04:09.975 00:04:09.975 00:04:09.975 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.975 http://cunit.sourceforge.net/ 00:04:09.975 00:04:09.975 00:04:09.975 Suite: memory 00:04:09.975 Test: test ... 00:04:09.975 register 0x200000200000 2097152 00:04:09.975 malloc 3145728 00:04:09.975 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.975 register 0x200000400000 4194304 00:04:09.975 buf 0x2000004fffc0 len 3145728 PASSED 00:04:09.975 malloc 64 00:04:09.975 buf 0x2000004ffec0 len 64 PASSED 00:04:09.975 malloc 4194304 00:04:09.975 register 0x200000800000 6291456 00:04:09.975 buf 0x2000009fffc0 len 4194304 PASSED 00:04:09.975 free 0x2000004fffc0 3145728 00:04:09.975 free 0x2000004ffec0 64 00:04:09.975 unregister 0x200000400000 4194304 PASSED 00:04:09.975 free 0x2000009fffc0 4194304 00:04:09.975 unregister 0x200000800000 6291456 PASSED 00:04:09.975 malloc 8388608 00:04:09.975 register 0x200000400000 10485760 00:04:09.975 buf 0x2000005fffc0 len 8388608 PASSED 00:04:09.975 free 0x2000005fffc0 8388608 00:04:09.975 unregister 0x200000400000 10485760 PASSED 00:04:09.975 passed 00:04:09.975 00:04:09.975 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.975 suites 1 1 n/a 0 0 00:04:09.975 tests 1 1 1 0 0 00:04:09.975 asserts 15 15 15 0 n/a 00:04:09.975 00:04:09.975 Elapsed time = 0.041 seconds 00:04:09.975 00:04:09.975 real 0m0.207s 00:04:09.975 user 0m0.058s 00:04:09.975 sys 0m0.047s 00:04:09.975 13:16:58 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.975 13:16:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:09.975 ************************************ 00:04:09.975 END TEST env_mem_callbacks 00:04:09.975 ************************************ 00:04:09.975 00:04:09.975 real 0m5.670s 00:04:09.975 user 0m4.283s 00:04:09.975 sys 0m1.006s 00:04:09.975 13:16:58 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.975 13:16:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.975 ************************************ 00:04:09.975 END TEST env 00:04:09.975 
************************************ 00:04:09.975 13:16:58 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:09.975 13:16:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.975 13:16:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.975 13:16:58 -- common/autotest_common.sh@10 -- # set +x 00:04:09.975 ************************************ 00:04:09.975 START TEST rpc 00:04:09.975 ************************************ 00:04:09.975 13:16:58 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:09.975 * Looking for test storage... 00:04:09.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.975 13:16:58 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:09.975 13:16:58 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:09.975 13:16:58 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.237 13:16:58 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.237 13:16:58 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.237 13:16:58 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.237 13:16:58 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.237 13:16:58 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.237 13:16:58 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.237 13:16:58 rpc -- scripts/common.sh@345 -- # : 1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.237 13:16:58 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.237 13:16:58 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.237 13:16:58 rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.237 13:16:58 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.237 13:16:58 rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.237 13:16:58 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.237 13:16:58 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.237 13:16:58 rpc -- scripts/common.sh@368 -- # return 0 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.237 --rc genhtml_branch_coverage=1 00:04:10.237 --rc genhtml_function_coverage=1 00:04:10.237 --rc genhtml_legend=1 00:04:10.237 --rc geninfo_all_blocks=1 00:04:10.237 --rc geninfo_unexecuted_blocks=1 00:04:10.237 00:04:10.237 ' 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.237 --rc genhtml_branch_coverage=1 00:04:10.237 --rc genhtml_function_coverage=1 00:04:10.237 --rc genhtml_legend=1 00:04:10.237 --rc geninfo_all_blocks=1 00:04:10.237 --rc geninfo_unexecuted_blocks=1 00:04:10.237 00:04:10.237 ' 00:04:10.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.237 --rc genhtml_branch_coverage=1 00:04:10.237 --rc genhtml_function_coverage=1 00:04:10.237 --rc genhtml_legend=1 00:04:10.237 --rc geninfo_all_blocks=1 00:04:10.237 --rc geninfo_unexecuted_blocks=1 00:04:10.237 00:04:10.237 ' 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.237 --rc genhtml_branch_coverage=1 00:04:10.237 --rc genhtml_function_coverage=1 00:04:10.237 --rc genhtml_legend=1 00:04:10.237 --rc geninfo_all_blocks=1 00:04:10.237 --rc geninfo_unexecuted_blocks=1 00:04:10.237 00:04:10.237 ' 00:04:10.237 13:16:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57091 00:04:10.237 13:16:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.237 13:16:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57091 00:04:10.237 13:16:58 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@835 -- # '[' -z 57091 ']' 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
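What the fixture is doing at this point, as a minimal sketch: start spdk_tgt with the bdev tracepoint group enabled, then poll the default RPC socket until the target answers. spdk_tgt and rpc.py are the real binaries/scripts from this checkout; the polling loop is an illustrative stand-in for the framework's waitforlisten helper, not its actual implementation.

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # Poll the default socket (/var/tmp/spdk.sock) until the target accepts RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done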
00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.237 13:16:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.237 [2024-11-26 13:16:58.659212] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:10.237 [2024-11-26 13:16:58.659469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57091 ] 00:04:10.498 [2024-11-26 13:16:58.813336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.498 [2024-11-26 13:16:58.894400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:10.498 [2024-11-26 13:16:58.894601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57091' to capture a snapshot of events at runtime. 00:04:10.498 [2024-11-26 13:16:58.894724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:10.498 [2024-11-26 13:16:58.894758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:10.498 [2024-11-26 13:16:58.894773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57091 for offline analysis/debug. 00:04:10.498 [2024-11-26 13:16:58.895486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.069 13:16:59 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.069 13:16:59 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:11.069 13:16:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.069 13:16:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.069 13:16:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.069 13:16:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.069 13:16:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.069 13:16:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.069 13:16:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.069 ************************************ 00:04:11.069 START TEST rpc_integrity 00:04:11.069 ************************************ 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.069 13:16:59 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.069 { 00:04:11.069 "name": "Malloc0", 00:04:11.069 "aliases": [ 00:04:11.069 "73e19799-a1a0-43c1-a777-69b61242c9e4" 00:04:11.069 ], 00:04:11.069 "product_name": "Malloc disk", 00:04:11.069 "block_size": 512, 00:04:11.069 "num_blocks": 16384, 00:04:11.069 "uuid": "73e19799-a1a0-43c1-a777-69b61242c9e4", 00:04:11.069 "assigned_rate_limits": { 00:04:11.069 "rw_ios_per_sec": 0, 00:04:11.069 "rw_mbytes_per_sec": 0, 00:04:11.069 "r_mbytes_per_sec": 0, 00:04:11.069 "w_mbytes_per_sec": 0 00:04:11.069 }, 00:04:11.069 "claimed": false, 00:04:11.069 "zoned": false, 00:04:11.069 "supported_io_types": { 00:04:11.069 "read": true, 00:04:11.069 "write": true, 00:04:11.069 "unmap": true, 00:04:11.069 "flush": true, 00:04:11.069 "reset": true, 00:04:11.069 "nvme_admin": false, 00:04:11.069 "nvme_io": false, 00:04:11.069 "nvme_io_md": false, 00:04:11.069 "write_zeroes": true, 00:04:11.069 "zcopy": true, 00:04:11.069 "get_zone_info": false, 00:04:11.069 "zone_management": false, 00:04:11.069 "zone_append": false, 00:04:11.069 "compare": false, 00:04:11.069 "compare_and_write": false, 00:04:11.069 "abort": true, 00:04:11.069 "seek_hole": false, 00:04:11.069 "seek_data": false, 00:04:11.069 "copy": true, 00:04:11.069 "nvme_iov_md": false 00:04:11.069 }, 00:04:11.069 "memory_domains": [ 00:04:11.069 { 00:04:11.069 "dma_device_id": "system", 00:04:11.069 "dma_device_type": 1 00:04:11.069 }, 00:04:11.069 { 00:04:11.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.069 "dma_device_type": 2 00:04:11.069 } 00:04:11.069 ], 00:04:11.069 "driver_specific": {} 00:04:11.069 } 00:04:11.069 ]' 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.069 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.069 [2024-11-26 13:16:59.626086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.069 [2024-11-26 13:16:59.626151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.069 [2024-11-26 13:16:59.626178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:11.069 [2024-11-26 13:16:59.626191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.069 [2024-11-26 13:16:59.628526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.069 [2024-11-26 13:16:59.628568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.069 Passthru0 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.069 
13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.069 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.331 { 00:04:11.331 "name": "Malloc0", 00:04:11.331 "aliases": [ 00:04:11.331 "73e19799-a1a0-43c1-a777-69b61242c9e4" 00:04:11.331 ], 00:04:11.331 "product_name": "Malloc disk", 00:04:11.331 "block_size": 512, 00:04:11.331 "num_blocks": 16384, 00:04:11.331 "uuid": "73e19799-a1a0-43c1-a777-69b61242c9e4", 00:04:11.331 "assigned_rate_limits": { 00:04:11.331 "rw_ios_per_sec": 0, 00:04:11.331 "rw_mbytes_per_sec": 0, 00:04:11.331 "r_mbytes_per_sec": 0, 00:04:11.331 "w_mbytes_per_sec": 0 00:04:11.331 }, 00:04:11.331 "claimed": true, 00:04:11.331 "claim_type": "exclusive_write", 00:04:11.331 "zoned": false, 00:04:11.331 "supported_io_types": { 00:04:11.331 "read": true, 00:04:11.331 "write": true, 00:04:11.331 "unmap": true, 00:04:11.331 "flush": true, 00:04:11.331 "reset": true, 00:04:11.331 "nvme_admin": false, 00:04:11.331 "nvme_io": false, 00:04:11.331 "nvme_io_md": false, 00:04:11.331 "write_zeroes": true, 00:04:11.331 "zcopy": true, 00:04:11.331 "get_zone_info": false, 00:04:11.331 "zone_management": false, 00:04:11.331 "zone_append": false, 00:04:11.331 "compare": false, 00:04:11.331 "compare_and_write": false, 00:04:11.331 "abort": true, 00:04:11.331 "seek_hole": false, 00:04:11.331 "seek_data": false, 00:04:11.331 "copy": true, 00:04:11.331 "nvme_iov_md": false 00:04:11.331 }, 00:04:11.331 "memory_domains": [ 00:04:11.331 { 00:04:11.331 "dma_device_id": "system", 00:04:11.331 "dma_device_type": 1 00:04:11.331 }, 00:04:11.331 { 00:04:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.331 "dma_device_type": 2 00:04:11.331 } 00:04:11.331 ], 00:04:11.331 "driver_specific": {} 00:04:11.331 }, 00:04:11.331 { 00:04:11.331 "name": "Passthru0", 00:04:11.331 "aliases": [ 00:04:11.331 "8d64d3b3-27bd-52d5-8275-ae3aa73bea39" 00:04:11.331 ], 00:04:11.331 "product_name": "passthru", 00:04:11.331 "block_size": 512, 00:04:11.331 "num_blocks": 16384, 00:04:11.331 "uuid": "8d64d3b3-27bd-52d5-8275-ae3aa73bea39", 00:04:11.331 "assigned_rate_limits": { 00:04:11.331 "rw_ios_per_sec": 0, 00:04:11.331 "rw_mbytes_per_sec": 0, 00:04:11.331 "r_mbytes_per_sec": 0, 00:04:11.331 "w_mbytes_per_sec": 0 00:04:11.331 }, 00:04:11.331 "claimed": false, 00:04:11.331 "zoned": false, 00:04:11.331 "supported_io_types": { 00:04:11.331 "read": true, 00:04:11.331 "write": true, 00:04:11.331 "unmap": true, 00:04:11.331 "flush": true, 00:04:11.331 "reset": true, 00:04:11.331 "nvme_admin": false, 00:04:11.331 "nvme_io": false, 00:04:11.331 "nvme_io_md": false, 00:04:11.331 "write_zeroes": true, 00:04:11.331 "zcopy": true, 00:04:11.331 "get_zone_info": false, 00:04:11.331 "zone_management": false, 00:04:11.331 "zone_append": false, 00:04:11.331 "compare": false, 00:04:11.331 "compare_and_write": false, 00:04:11.331 "abort": true, 00:04:11.331 "seek_hole": false, 00:04:11.331 "seek_data": false, 00:04:11.331 "copy": true, 00:04:11.331 "nvme_iov_md": false 00:04:11.331 }, 00:04:11.331 "memory_domains": [ 00:04:11.331 { 00:04:11.331 "dma_device_id": "system", 00:04:11.331 "dma_device_type": 1 00:04:11.331 }, 00:04:11.331 { 00:04:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.331 "dma_device_type": 2 
00:04:11.331 } 00:04:11.331 ], 00:04:11.331 "driver_specific": { 00:04:11.331 "passthru": { 00:04:11.331 "name": "Passthru0", 00:04:11.331 "base_bdev_name": "Malloc0" 00:04:11.331 } 00:04:11.331 } 00:04:11.331 } 00:04:11.331 ]' 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.331 ************************************ 00:04:11.331 END TEST rpc_integrity 00:04:11.331 ************************************ 00:04:11.331 13:16:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.331 00:04:11.331 real 0m0.248s 00:04:11.331 user 0m0.139s 00:04:11.331 sys 0m0.027s 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.331 13:16:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.331 13:16:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.331 13:16:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 ************************************ 00:04:11.331 START TEST rpc_plugins 00:04:11.331 ************************************ 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.331 { 00:04:11.331 "name": "Malloc1", 00:04:11.331 "aliases": 
[ 00:04:11.331 "17524e6c-fa1c-4d33-91df-84a497db64f7" 00:04:11.331 ], 00:04:11.331 "product_name": "Malloc disk", 00:04:11.331 "block_size": 4096, 00:04:11.331 "num_blocks": 256, 00:04:11.331 "uuid": "17524e6c-fa1c-4d33-91df-84a497db64f7", 00:04:11.331 "assigned_rate_limits": { 00:04:11.331 "rw_ios_per_sec": 0, 00:04:11.331 "rw_mbytes_per_sec": 0, 00:04:11.331 "r_mbytes_per_sec": 0, 00:04:11.331 "w_mbytes_per_sec": 0 00:04:11.331 }, 00:04:11.331 "claimed": false, 00:04:11.331 "zoned": false, 00:04:11.331 "supported_io_types": { 00:04:11.331 "read": true, 00:04:11.331 "write": true, 00:04:11.331 "unmap": true, 00:04:11.331 "flush": true, 00:04:11.331 "reset": true, 00:04:11.331 "nvme_admin": false, 00:04:11.331 "nvme_io": false, 00:04:11.331 "nvme_io_md": false, 00:04:11.331 "write_zeroes": true, 00:04:11.331 "zcopy": true, 00:04:11.331 "get_zone_info": false, 00:04:11.331 "zone_management": false, 00:04:11.331 "zone_append": false, 00:04:11.331 "compare": false, 00:04:11.331 "compare_and_write": false, 00:04:11.331 "abort": true, 00:04:11.331 "seek_hole": false, 00:04:11.331 "seek_data": false, 00:04:11.331 "copy": true, 00:04:11.331 "nvme_iov_md": false 00:04:11.331 }, 00:04:11.331 "memory_domains": [ 00:04:11.331 { 00:04:11.331 "dma_device_id": "system", 00:04:11.331 "dma_device_type": 1 00:04:11.331 }, 00:04:11.331 { 00:04:11.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.331 "dma_device_type": 2 00:04:11.331 } 00:04:11.331 ], 00:04:11.331 "driver_specific": {} 00:04:11.331 } 00:04:11.331 ]' 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.331 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.331 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.593 ************************************ 00:04:11.593 END TEST rpc_plugins 00:04:11.593 ************************************ 00:04:11.593 13:16:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.593 00:04:11.593 real 0m0.117s 00:04:11.593 user 0m0.064s 00:04:11.593 sys 0m0.017s 00:04:11.593 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.593 13:16:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.593 13:16:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.593 13:16:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.593 13:16:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.593 13:16:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.593 ************************************ 00:04:11.593 START TEST rpc_trace_cmd_test 00:04:11.593 ************************************ 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.593 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57091", 00:04:11.593 "tpoint_group_mask": "0x8", 00:04:11.593 "iscsi_conn": { 00:04:11.593 "mask": "0x2", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "scsi": { 00:04:11.593 "mask": "0x4", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "bdev": { 00:04:11.593 "mask": "0x8", 00:04:11.593 "tpoint_mask": "0xffffffffffffffff" 00:04:11.593 }, 00:04:11.593 "nvmf_rdma": { 00:04:11.593 "mask": "0x10", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "nvmf_tcp": { 00:04:11.593 "mask": "0x20", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "ftl": { 00:04:11.593 "mask": "0x40", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "blobfs": { 00:04:11.593 "mask": "0x80", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "dsa": { 00:04:11.593 "mask": "0x200", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "thread": { 00:04:11.593 "mask": "0x400", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "nvme_pcie": { 00:04:11.593 "mask": "0x800", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "iaa": { 00:04:11.593 "mask": "0x1000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "nvme_tcp": { 00:04:11.593 "mask": "0x2000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "bdev_nvme": { 00:04:11.593 "mask": "0x4000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "sock": { 00:04:11.593 "mask": "0x8000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "blob": { 00:04:11.593 "mask": "0x10000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "bdev_raid": { 00:04:11.593 "mask": "0x20000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 }, 00:04:11.593 "scheduler": { 00:04:11.593 "mask": "0x40000", 00:04:11.593 "tpoint_mask": "0x0" 00:04:11.593 } 00:04:11.593 }' 00:04:11.593 13:16:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.593 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.594 ************************************ 00:04:11.594 END TEST rpc_trace_cmd_test 00:04:11.594 ************************************ 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.594 00:04:11.594 real 0m0.172s 
00:04:11.594 user 0m0.140s 00:04:11.594 sys 0m0.024s 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.594 13:17:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 13:17:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.856 13:17:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.856 13:17:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.856 13:17:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.856 13:17:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.856 13:17:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 ************************************ 00:04:11.856 START TEST rpc_daemon_integrity 00:04:11.856 ************************************ 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.856 { 00:04:11.856 "name": "Malloc2", 00:04:11.856 "aliases": [ 00:04:11.856 "2961dc05-951c-44d0-9c85-aebedc0acc79" 00:04:11.856 ], 00:04:11.856 "product_name": "Malloc disk", 00:04:11.856 "block_size": 512, 00:04:11.856 "num_blocks": 16384, 00:04:11.856 "uuid": "2961dc05-951c-44d0-9c85-aebedc0acc79", 00:04:11.856 "assigned_rate_limits": { 00:04:11.856 "rw_ios_per_sec": 0, 00:04:11.856 "rw_mbytes_per_sec": 0, 00:04:11.856 "r_mbytes_per_sec": 0, 00:04:11.856 "w_mbytes_per_sec": 0 00:04:11.856 }, 00:04:11.856 "claimed": false, 00:04:11.856 "zoned": false, 00:04:11.856 "supported_io_types": { 00:04:11.856 "read": true, 00:04:11.856 "write": true, 00:04:11.856 "unmap": true, 00:04:11.856 "flush": true, 00:04:11.856 "reset": true, 00:04:11.856 "nvme_admin": false, 00:04:11.856 "nvme_io": false, 00:04:11.856 "nvme_io_md": false, 00:04:11.856 "write_zeroes": true, 00:04:11.856 "zcopy": true, 00:04:11.856 "get_zone_info": false, 00:04:11.856 "zone_management": false, 00:04:11.856 "zone_append": false, 00:04:11.856 "compare": false, 00:04:11.856 
"compare_and_write": false, 00:04:11.856 "abort": true, 00:04:11.856 "seek_hole": false, 00:04:11.856 "seek_data": false, 00:04:11.856 "copy": true, 00:04:11.856 "nvme_iov_md": false 00:04:11.856 }, 00:04:11.856 "memory_domains": [ 00:04:11.856 { 00:04:11.856 "dma_device_id": "system", 00:04:11.856 "dma_device_type": 1 00:04:11.856 }, 00:04:11.856 { 00:04:11.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.856 "dma_device_type": 2 00:04:11.856 } 00:04:11.856 ], 00:04:11.856 "driver_specific": {} 00:04:11.856 } 00:04:11.856 ]' 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 [2024-11-26 13:17:00.273384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:11.856 [2024-11-26 13:17:00.273438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.856 [2024-11-26 13:17:00.273471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:11.856 [2024-11-26 13:17:00.273485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.856 [2024-11-26 13:17:00.275763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.856 [2024-11-26 13:17:00.275927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.856 Passthru0 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.856 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.856 { 00:04:11.856 "name": "Malloc2", 00:04:11.856 "aliases": [ 00:04:11.856 "2961dc05-951c-44d0-9c85-aebedc0acc79" 00:04:11.856 ], 00:04:11.856 "product_name": "Malloc disk", 00:04:11.856 "block_size": 512, 00:04:11.856 "num_blocks": 16384, 00:04:11.856 "uuid": "2961dc05-951c-44d0-9c85-aebedc0acc79", 00:04:11.856 "assigned_rate_limits": { 00:04:11.856 "rw_ios_per_sec": 0, 00:04:11.856 "rw_mbytes_per_sec": 0, 00:04:11.856 "r_mbytes_per_sec": 0, 00:04:11.856 "w_mbytes_per_sec": 0 00:04:11.856 }, 00:04:11.856 "claimed": true, 00:04:11.856 "claim_type": "exclusive_write", 00:04:11.856 "zoned": false, 00:04:11.856 "supported_io_types": { 00:04:11.856 "read": true, 00:04:11.856 "write": true, 00:04:11.856 "unmap": true, 00:04:11.856 "flush": true, 00:04:11.856 "reset": true, 00:04:11.856 "nvme_admin": false, 00:04:11.856 "nvme_io": false, 00:04:11.856 "nvme_io_md": false, 00:04:11.856 "write_zeroes": true, 00:04:11.856 "zcopy": true, 00:04:11.856 "get_zone_info": false, 00:04:11.856 "zone_management": false, 00:04:11.856 "zone_append": false, 00:04:11.856 "compare": false, 00:04:11.856 "compare_and_write": false, 00:04:11.856 "abort": true, 00:04:11.856 "seek_hole": false, 00:04:11.856 "seek_data": false, 
00:04:11.856 "copy": true, 00:04:11.857 "nvme_iov_md": false 00:04:11.857 }, 00:04:11.857 "memory_domains": [ 00:04:11.857 { 00:04:11.857 "dma_device_id": "system", 00:04:11.857 "dma_device_type": 1 00:04:11.857 }, 00:04:11.857 { 00:04:11.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.857 "dma_device_type": 2 00:04:11.857 } 00:04:11.857 ], 00:04:11.857 "driver_specific": {} 00:04:11.857 }, 00:04:11.857 { 00:04:11.857 "name": "Passthru0", 00:04:11.857 "aliases": [ 00:04:11.857 "88d8eb1f-eb22-5c91-98fc-4e95b9d71dd0" 00:04:11.857 ], 00:04:11.857 "product_name": "passthru", 00:04:11.857 "block_size": 512, 00:04:11.857 "num_blocks": 16384, 00:04:11.857 "uuid": "88d8eb1f-eb22-5c91-98fc-4e95b9d71dd0", 00:04:11.857 "assigned_rate_limits": { 00:04:11.857 "rw_ios_per_sec": 0, 00:04:11.857 "rw_mbytes_per_sec": 0, 00:04:11.857 "r_mbytes_per_sec": 0, 00:04:11.857 "w_mbytes_per_sec": 0 00:04:11.857 }, 00:04:11.857 "claimed": false, 00:04:11.857 "zoned": false, 00:04:11.857 "supported_io_types": { 00:04:11.857 "read": true, 00:04:11.857 "write": true, 00:04:11.857 "unmap": true, 00:04:11.857 "flush": true, 00:04:11.857 "reset": true, 00:04:11.857 "nvme_admin": false, 00:04:11.857 "nvme_io": false, 00:04:11.857 "nvme_io_md": false, 00:04:11.857 "write_zeroes": true, 00:04:11.857 "zcopy": true, 00:04:11.857 "get_zone_info": false, 00:04:11.857 "zone_management": false, 00:04:11.857 "zone_append": false, 00:04:11.857 "compare": false, 00:04:11.857 "compare_and_write": false, 00:04:11.857 "abort": true, 00:04:11.857 "seek_hole": false, 00:04:11.857 "seek_data": false, 00:04:11.857 "copy": true, 00:04:11.857 "nvme_iov_md": false 00:04:11.857 }, 00:04:11.857 "memory_domains": [ 00:04:11.857 { 00:04:11.857 "dma_device_id": "system", 00:04:11.857 "dma_device_type": 1 00:04:11.857 }, 00:04:11.857 { 00:04:11.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.857 "dma_device_type": 2 00:04:11.857 } 00:04:11.857 ], 00:04:11.857 "driver_specific": { 00:04:11.857 "passthru": { 00:04:11.857 "name": "Passthru0", 00:04:11.857 "base_bdev_name": "Malloc2" 00:04:11.857 } 00:04:11.857 } 00:04:11.857 } 00:04:11.857 ]' 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
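The rpc_daemon_integrity checks above reduce to a short RPC sequence: create a malloc bdev, layer a passthru vbdev on top of it so the base gets claimed, confirm both show up in bdev_get_bdevs, then delete them in reverse order and confirm the list is empty again. A minimal sketch of the same flow, assuming a live target and SPDK's standard scripts/rpc.py client rather than the test's rpc_cmd wrapper:

    # create an 8 MiB malloc bdev with 512-byte blocks (mirrors 'bdev_malloc_create 8 512')
    malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
    # wrap it in a passthru vbdev; this claims the base bdev exclusively
    scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    # both bdevs should now be listed
    test "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 2
    # tear down in reverse order; the bdev list should end up empty
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete "$malloc"
    test "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0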
00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.857 ************************************ 00:04:11.857 END TEST rpc_daemon_integrity 00:04:11.857 ************************************ 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.857 00:04:11.857 real 0m0.238s 00:04:11.857 user 0m0.123s 00:04:11.857 sys 0m0.037s 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.857 13:17:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.119 13:17:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.119 13:17:00 rpc -- rpc/rpc.sh@84 -- # killprocess 57091 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 57091 ']' 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@958 -- # kill -0 57091 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57091 00:04:12.119 killing process with pid 57091 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57091' 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@973 -- # kill 57091 00:04:12.119 13:17:00 rpc -- common/autotest_common.sh@978 -- # wait 57091 00:04:13.504 00:04:13.504 real 0m3.610s 00:04:13.504 user 0m3.989s 00:04:13.504 sys 0m0.630s 00:04:13.504 13:17:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.504 13:17:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.504 ************************************ 00:04:13.504 END TEST rpc 00:04:13.504 ************************************ 00:04:13.765 13:17:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:13.765 13:17:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.765 13:17:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.765 13:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:13.765 ************************************ 00:04:13.765 START TEST skip_rpc 00:04:13.765 ************************************ 00:04:13.765 13:17:02 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:13.765 * Looking for test storage... 
00:04:13.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.765 13:17:02 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:13.765 13:17:02 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:13.765 13:17:02 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.765 13:17:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.765 13:17:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.766 --rc genhtml_branch_coverage=1 00:04:13.766 --rc genhtml_function_coverage=1 00:04:13.766 --rc genhtml_legend=1 00:04:13.766 --rc geninfo_all_blocks=1 00:04:13.766 --rc geninfo_unexecuted_blocks=1 00:04:13.766 00:04:13.766 ' 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.766 --rc genhtml_branch_coverage=1 00:04:13.766 --rc genhtml_function_coverage=1 00:04:13.766 --rc genhtml_legend=1 00:04:13.766 --rc geninfo_all_blocks=1 00:04:13.766 --rc geninfo_unexecuted_blocks=1 00:04:13.766 00:04:13.766 ' 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:13.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.766 --rc genhtml_branch_coverage=1 00:04:13.766 --rc genhtml_function_coverage=1 00:04:13.766 --rc genhtml_legend=1 00:04:13.766 --rc geninfo_all_blocks=1 00:04:13.766 --rc geninfo_unexecuted_blocks=1 00:04:13.766 00:04:13.766 ' 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.766 --rc genhtml_branch_coverage=1 00:04:13.766 --rc genhtml_function_coverage=1 00:04:13.766 --rc genhtml_legend=1 00:04:13.766 --rc geninfo_all_blocks=1 00:04:13.766 --rc geninfo_unexecuted_blocks=1 00:04:13.766 00:04:13.766 ' 00:04:13.766 13:17:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:13.766 13:17:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:13.766 13:17:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.766 13:17:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.766 ************************************ 00:04:13.766 START TEST skip_rpc 00:04:13.766 ************************************ 00:04:13.766 13:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:13.766 13:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57309 00:04:13.766 13:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.766 13:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:13.766 13:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:13.766 [2024-11-26 13:17:02.325731] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:13.766 [2024-11-26 13:17:02.325853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57309 ] 00:04:14.027 [2024-11-26 13:17:02.488174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.027 [2024-11-26 13:17:02.592530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57309 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57309 ']' 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57309 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57309 00:04:19.319 killing process with pid 57309 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57309' 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57309 00:04:19.319 13:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57309 00:04:20.259 00:04:20.259 real 0m6.208s 00:04:20.259 user 0m5.827s 00:04:20.259 sys 0m0.271s 00:04:20.259 13:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.259 13:17:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.259 ************************************ 00:04:20.259 END TEST skip_rpc 00:04:20.259 
************************************ 00:04:20.259 13:17:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.259 13:17:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.259 13:17:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.259 13:17:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.259 ************************************ 00:04:20.259 START TEST skip_rpc_with_json 00:04:20.259 ************************************ 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57402 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57402 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57402 ']' 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.259 13:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.259 [2024-11-26 13:17:08.583163] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:20.259 [2024-11-26 13:17:08.583293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57402 ] 00:04:20.259 [2024-11-26 13:17:08.741091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.519 [2024-11-26 13:17:08.828090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.090 [2024-11-26 13:17:09.416461] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.090 request: 00:04:21.090 { 00:04:21.090 "trtype": "tcp", 00:04:21.090 "method": "nvmf_get_transports", 00:04:21.090 "req_id": 1 00:04:21.090 } 00:04:21.090 Got JSON-RPC error response 00:04:21.090 response: 00:04:21.090 { 00:04:21.090 "code": -19, 00:04:21.090 "message": "No such device" 00:04:21.090 } 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.090 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.091 [2024-11-26 13:17:09.424559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.091 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:21.091 { 00:04:21.091 "subsystems": [ 00:04:21.091 { 00:04:21.091 "subsystem": "fsdev", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "fsdev_set_opts", 00:04:21.091 "params": { 00:04:21.091 "fsdev_io_pool_size": 65535, 00:04:21.091 "fsdev_io_cache_size": 256 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "keyring", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "iobuf", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "iobuf_set_options", 00:04:21.091 "params": { 00:04:21.091 "small_pool_count": 8192, 00:04:21.091 "large_pool_count": 1024, 00:04:21.091 "small_bufsize": 8192, 00:04:21.091 "large_bufsize": 135168, 00:04:21.091 "enable_numa": false 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "sock", 00:04:21.091 "config": [ 00:04:21.091 { 
00:04:21.091 "method": "sock_set_default_impl", 00:04:21.091 "params": { 00:04:21.091 "impl_name": "posix" 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "sock_impl_set_options", 00:04:21.091 "params": { 00:04:21.091 "impl_name": "ssl", 00:04:21.091 "recv_buf_size": 4096, 00:04:21.091 "send_buf_size": 4096, 00:04:21.091 "enable_recv_pipe": true, 00:04:21.091 "enable_quickack": false, 00:04:21.091 "enable_placement_id": 0, 00:04:21.091 "enable_zerocopy_send_server": true, 00:04:21.091 "enable_zerocopy_send_client": false, 00:04:21.091 "zerocopy_threshold": 0, 00:04:21.091 "tls_version": 0, 00:04:21.091 "enable_ktls": false 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "sock_impl_set_options", 00:04:21.091 "params": { 00:04:21.091 "impl_name": "posix", 00:04:21.091 "recv_buf_size": 2097152, 00:04:21.091 "send_buf_size": 2097152, 00:04:21.091 "enable_recv_pipe": true, 00:04:21.091 "enable_quickack": false, 00:04:21.091 "enable_placement_id": 0, 00:04:21.091 "enable_zerocopy_send_server": true, 00:04:21.091 "enable_zerocopy_send_client": false, 00:04:21.091 "zerocopy_threshold": 0, 00:04:21.091 "tls_version": 0, 00:04:21.091 "enable_ktls": false 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "vmd", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "accel", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "accel_set_options", 00:04:21.091 "params": { 00:04:21.091 "small_cache_size": 128, 00:04:21.091 "large_cache_size": 16, 00:04:21.091 "task_count": 2048, 00:04:21.091 "sequence_count": 2048, 00:04:21.091 "buf_count": 2048 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "bdev", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "bdev_set_options", 00:04:21.091 "params": { 00:04:21.091 "bdev_io_pool_size": 65535, 00:04:21.091 "bdev_io_cache_size": 256, 00:04:21.091 "bdev_auto_examine": true, 00:04:21.091 "iobuf_small_cache_size": 128, 00:04:21.091 "iobuf_large_cache_size": 16 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "bdev_raid_set_options", 00:04:21.091 "params": { 00:04:21.091 "process_window_size_kb": 1024, 00:04:21.091 "process_max_bandwidth_mb_sec": 0 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "bdev_iscsi_set_options", 00:04:21.091 "params": { 00:04:21.091 "timeout_sec": 30 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "bdev_nvme_set_options", 00:04:21.091 "params": { 00:04:21.091 "action_on_timeout": "none", 00:04:21.091 "timeout_us": 0, 00:04:21.091 "timeout_admin_us": 0, 00:04:21.091 "keep_alive_timeout_ms": 10000, 00:04:21.091 "arbitration_burst": 0, 00:04:21.091 "low_priority_weight": 0, 00:04:21.091 "medium_priority_weight": 0, 00:04:21.091 "high_priority_weight": 0, 00:04:21.091 "nvme_adminq_poll_period_us": 10000, 00:04:21.091 "nvme_ioq_poll_period_us": 0, 00:04:21.091 "io_queue_requests": 0, 00:04:21.091 "delay_cmd_submit": true, 00:04:21.091 "transport_retry_count": 4, 00:04:21.091 "bdev_retry_count": 3, 00:04:21.091 "transport_ack_timeout": 0, 00:04:21.091 "ctrlr_loss_timeout_sec": 0, 00:04:21.091 "reconnect_delay_sec": 0, 00:04:21.091 "fast_io_fail_timeout_sec": 0, 00:04:21.091 "disable_auto_failback": false, 00:04:21.091 "generate_uuids": false, 00:04:21.091 "transport_tos": 0, 00:04:21.091 "nvme_error_stat": false, 00:04:21.091 "rdma_srq_size": 0, 00:04:21.091 "io_path_stat": false, 
00:04:21.091 "allow_accel_sequence": false, 00:04:21.091 "rdma_max_cq_size": 0, 00:04:21.091 "rdma_cm_event_timeout_ms": 0, 00:04:21.091 "dhchap_digests": [ 00:04:21.091 "sha256", 00:04:21.091 "sha384", 00:04:21.091 "sha512" 00:04:21.091 ], 00:04:21.091 "dhchap_dhgroups": [ 00:04:21.091 "null", 00:04:21.091 "ffdhe2048", 00:04:21.091 "ffdhe3072", 00:04:21.091 "ffdhe4096", 00:04:21.091 "ffdhe6144", 00:04:21.091 "ffdhe8192" 00:04:21.091 ] 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "bdev_nvme_set_hotplug", 00:04:21.091 "params": { 00:04:21.091 "period_us": 100000, 00:04:21.091 "enable": false 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "bdev_wait_for_examine" 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "scsi", 00:04:21.091 "config": null 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "scheduler", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "framework_set_scheduler", 00:04:21.091 "params": { 00:04:21.091 "name": "static" 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "vhost_scsi", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "vhost_blk", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "ublk", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "nbd", 00:04:21.091 "config": [] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "nvmf", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "nvmf_set_config", 00:04:21.091 "params": { 00:04:21.091 "discovery_filter": "match_any", 00:04:21.091 "admin_cmd_passthru": { 00:04:21.091 "identify_ctrlr": false 00:04:21.091 }, 00:04:21.091 "dhchap_digests": [ 00:04:21.091 "sha256", 00:04:21.091 "sha384", 00:04:21.091 "sha512" 00:04:21.091 ], 00:04:21.091 "dhchap_dhgroups": [ 00:04:21.091 "null", 00:04:21.091 "ffdhe2048", 00:04:21.091 "ffdhe3072", 00:04:21.091 "ffdhe4096", 00:04:21.091 "ffdhe6144", 00:04:21.091 "ffdhe8192" 00:04:21.091 ] 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "nvmf_set_max_subsystems", 00:04:21.091 "params": { 00:04:21.091 "max_subsystems": 1024 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "nvmf_set_crdt", 00:04:21.091 "params": { 00:04:21.091 "crdt1": 0, 00:04:21.091 "crdt2": 0, 00:04:21.091 "crdt3": 0 00:04:21.091 } 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "method": "nvmf_create_transport", 00:04:21.091 "params": { 00:04:21.091 "trtype": "TCP", 00:04:21.091 "max_queue_depth": 128, 00:04:21.091 "max_io_qpairs_per_ctrlr": 127, 00:04:21.091 "in_capsule_data_size": 4096, 00:04:21.091 "max_io_size": 131072, 00:04:21.091 "io_unit_size": 131072, 00:04:21.091 "max_aq_depth": 128, 00:04:21.091 "num_shared_buffers": 511, 00:04:21.091 "buf_cache_size": 4294967295, 00:04:21.091 "dif_insert_or_strip": false, 00:04:21.091 "zcopy": false, 00:04:21.091 "c2h_success": true, 00:04:21.091 "sock_priority": 0, 00:04:21.091 "abort_timeout_sec": 1, 00:04:21.091 "ack_timeout": 0, 00:04:21.091 "data_wr_pool_size": 0 00:04:21.091 } 00:04:21.091 } 00:04:21.091 ] 00:04:21.091 }, 00:04:21.091 { 00:04:21.091 "subsystem": "iscsi", 00:04:21.091 "config": [ 00:04:21.091 { 00:04:21.091 "method": "iscsi_set_options", 00:04:21.091 "params": { 00:04:21.091 "node_base": "iqn.2016-06.io.spdk", 00:04:21.091 "max_sessions": 128, 00:04:21.092 "max_connections_per_session": 2, 00:04:21.092 "max_queue_depth": 64, 00:04:21.092 
"default_time2wait": 2, 00:04:21.092 "default_time2retain": 20, 00:04:21.092 "first_burst_length": 8192, 00:04:21.092 "immediate_data": true, 00:04:21.092 "allow_duplicated_isid": false, 00:04:21.092 "error_recovery_level": 0, 00:04:21.092 "nop_timeout": 60, 00:04:21.092 "nop_in_interval": 30, 00:04:21.092 "disable_chap": false, 00:04:21.092 "require_chap": false, 00:04:21.092 "mutual_chap": false, 00:04:21.092 "chap_group": 0, 00:04:21.092 "max_large_datain_per_connection": 64, 00:04:21.092 "max_r2t_per_connection": 4, 00:04:21.092 "pdu_pool_size": 36864, 00:04:21.092 "immediate_data_pool_size": 16384, 00:04:21.092 "data_out_pool_size": 2048 00:04:21.092 } 00:04:21.092 } 00:04:21.092 ] 00:04:21.092 } 00:04:21.092 ] 00:04:21.092 } 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57402 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57402 ']' 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57402 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57402 00:04:21.092 killing process with pid 57402 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57402' 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57402 00:04:21.092 13:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57402 00:04:22.474 13:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57436 00:04:22.474 13:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.474 13:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57436 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57436 ']' 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57436 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57436 00:04:27.756 killing process with pid 57436 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57436' 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57436 00:04:27.756 13:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57436 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.691 00:04:28.691 real 0m8.505s 00:04:28.691 user 0m8.118s 00:04:28.691 sys 0m0.615s 00:04:28.691 ************************************ 00:04:28.691 END TEST skip_rpc_with_json 00:04:28.691 ************************************ 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.691 13:17:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.691 13:17:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.691 13:17:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.691 13:17:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.691 ************************************ 00:04:28.691 START TEST skip_rpc_with_delay 00:04:28.691 ************************************ 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.691 [2024-11-26 13:17:17.154048] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
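The failure printed above is the whole point of skip_rpc_with_delay: --no-rpc-server disables the RPC server, while --wait-for-rpc asks the app to pause until an RPC arrives, so spdk_tgt must refuse the combination and exit non-zero. A hedged sketch of the same check, assuming a standard SPDK build tree:

    # expected to fail fast with "Cannot use '--wait-for-rpc' if no RPC server ..."
    if ! build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'conflicting flags rejected, as the test requires'
    fi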
00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:28.691 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.691 ************************************ 00:04:28.692 END TEST skip_rpc_with_delay 00:04:28.692 ************************************ 00:04:28.692 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.692 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.692 00:04:28.692 real 0m0.132s 00:04:28.692 user 0m0.075s 00:04:28.692 sys 0m0.056s 00:04:28.692 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.692 13:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.692 13:17:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.692 13:17:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.692 13:17:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.692 13:17:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.692 13:17:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.692 13:17:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.950 ************************************ 00:04:28.950 START TEST exit_on_failed_rpc_init 00:04:28.950 ************************************ 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57559 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57559 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57559 ']' 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.950 13:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.950 [2024-11-26 13:17:17.342997] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:28.950 [2024-11-26 13:17:17.343247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57559 ] 00:04:28.950 [2024-11-26 13:17:17.515265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.208 [2024-11-26 13:17:17.629755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.774 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:29.775 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.775 [2024-11-26 13:17:18.305105] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:29.775 [2024-11-26 13:17:18.306176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57577 ] 00:04:30.033 [2024-11-26 13:17:18.471198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.033 [2024-11-26 13:17:18.574607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.033 [2024-11-26 13:17:18.574681] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
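This "socket in use" error is deliberately provoked: the first spdk_tgt (pid 57559) already owns the default /var/tmp/spdk.sock, so the second instance launched on core mask 0x2 cannot bind its RPC listener and must exit non-zero. A minimal sketch of the collision, assuming a standard build tree (the test proper waits via waitforlisten instead of a fixed sleep):

    build/bin/spdk_tgt -m 0x1 &    # first instance takes /var/tmp/spdk.sock
    first=$!
    sleep 1                        # crude stand-in for waitforlisten
    if ! build/bin/spdk_tgt -m 0x2; then
        echo 'second instance failed RPC init as expected'
    fi
    kill "$first"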
00:04:30.033 [2024-11-26 13:17:18.574694] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:30.033 [2024-11-26 13:17:18.574708] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57559 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57559 ']' 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57559 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57559 00:04:30.292 killing process with pid 57559 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57559' 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57559 00:04:30.292 13:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57559 00:04:32.191 00:04:32.191 real 0m3.010s 00:04:32.191 user 0m3.356s 00:04:32.191 sys 0m0.421s 00:04:32.191 ************************************ 00:04:32.191 END TEST exit_on_failed_rpc_init 00:04:32.191 ************************************ 00:04:32.191 13:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.191 13:17:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.192 13:17:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.192 ************************************ 00:04:32.192 END TEST skip_rpc 00:04:32.192 ************************************ 00:04:32.192 00:04:32.192 real 0m18.224s 00:04:32.192 user 0m17.510s 00:04:32.192 sys 0m1.553s 00:04:32.192 13:17:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.192 13:17:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.192 13:17:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.192 13:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.192 13:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.192 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.192 
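With the skip_rpc suite done, the large JSON dump earlier is easier to read in hindsight: skip_rpc_with_json configures a live target over RPC, snapshots everything with save_config, then boots a fresh target non-interactively from that file and greps its log to prove the configuration was replayed. A hedged sketch of that round trip, assuming a running target and the standard scripts/rpc.py client:

    scripts/rpc.py nvmf_create_transport -t tcp          # make a change worth persisting
    scripts/rpc.py save_config > test/rpc/config.json    # snapshot the live configuration
    # replay the saved state in a fresh, RPC-less target, as the test does
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
    reload=$!; sleep 5; kill "$reload"
    grep -q 'TCP Transport Init' test/rpc/log.txt        # transport was recreated from JSON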
************************************ 00:04:32.192 START TEST rpc_client 00:04:32.192 ************************************ 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:32.192 * Looking for test storage... 00:04:32.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.192 13:17:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:32.192 OK 00:04:32.192 13:17:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.192 00:04:32.192 real 0m0.194s 00:04:32.192 user 0m0.100s 00:04:32.192 sys 0m0.100s 00:04:32.192 ************************************ 00:04:32.192 END TEST rpc_client 00:04:32.192 ************************************ 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.192 13:17:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.192 13:17:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.192 13:17:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.192 13:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.192 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.192 ************************************ 00:04:32.192 START TEST json_config 00:04:32.192 ************************************ 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.192 13:17:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.192 13:17:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.192 13:17:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.192 13:17:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.192 13:17:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.192 13:17:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:32.192 13:17:20 json_config -- scripts/common.sh@345 -- # : 1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.192 13:17:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.192 13:17:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@353 -- # local d=1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.192 13:17:20 json_config -- scripts/common.sh@355 -- # echo 1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.192 13:17:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@353 -- # local d=2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.192 13:17:20 json_config -- scripts/common.sh@355 -- # echo 2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.192 13:17:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.192 13:17:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.192 13:17:20 json_config -- scripts/common.sh@368 -- # return 0 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.192 --rc geninfo_all_blocks=1 00:04:32.192 --rc geninfo_unexecuted_blocks=1 00:04:32.192 00:04:32.192 ' 00:04:32.192 13:17:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.192 --rc genhtml_branch_coverage=1 00:04:32.192 --rc genhtml_function_coverage=1 00:04:32.192 --rc genhtml_legend=1 00:04:32.193 --rc geninfo_all_blocks=1 00:04:32.193 --rc geninfo_unexecuted_blocks=1 00:04:32.193 00:04:32.193 ' 00:04:32.193 13:17:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.193 13:17:20 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.193 13:17:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.452 13:17:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.452 13:17:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.452 13:17:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.452 13:17:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.452 13:17:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.452 13:17:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.452 13:17:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.452 13:17:20 json_config -- paths/export.sh@5 -- # export PATH 00:04:32.452 13:17:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@51 -- # : 0 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.452 13:17:20 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.452 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.452 13:17:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:32.452 WARNING: No tests are enabled so not running JSON configuration tests 00:04:32.452 13:17:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:32.452 00:04:32.452 real 0m0.148s 00:04:32.452 user 0m0.097s 00:04:32.452 sys 0m0.050s 00:04:32.452 13:17:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.452 ************************************ 00:04:32.452 END TEST json_config 00:04:32.452 ************************************ 00:04:32.452 13:17:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 13:17:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.452 13:17:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.452 13:17:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.452 13:17:20 -- common/autotest_common.sh@10 -- # set +x 00:04:32.452 ************************************ 00:04:32.452 START TEST json_config_extra_key 00:04:32.452 ************************************ 00:04:32.452 13:17:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:32.452 13:17:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.452 13:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.453 13:17:20 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.453 --rc genhtml_branch_coverage=1 00:04:32.453 --rc genhtml_function_coverage=1 00:04:32.453 --rc genhtml_legend=1 00:04:32.453 --rc geninfo_all_blocks=1 00:04:32.453 --rc geninfo_unexecuted_blocks=1 00:04:32.453 00:04:32.453 ' 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.453 --rc genhtml_branch_coverage=1 00:04:32.453 --rc genhtml_function_coverage=1 00:04:32.453 --rc genhtml_legend=1 00:04:32.453 --rc geninfo_all_blocks=1 00:04:32.453 --rc geninfo_unexecuted_blocks=1 00:04:32.453 00:04:32.453 ' 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.453 --rc genhtml_branch_coverage=1 00:04:32.453 --rc genhtml_function_coverage=1 00:04:32.453 --rc genhtml_legend=1 00:04:32.453 --rc geninfo_all_blocks=1 00:04:32.453 --rc geninfo_unexecuted_blocks=1 00:04:32.453 00:04:32.453 ' 00:04:32.453 13:17:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.453 --rc genhtml_branch_coverage=1 00:04:32.453 --rc 
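The xtrace above is scripts/common.sh deciding, via `lt 1.15 2` and `cmp_versions`, whether the installed lcov (1.15) is older than 2 so the --rc coverage flags can be enabled; the same check reruns at the top of every test in this log and returns 0 ("less than") each time. A minimal standalone sketch of that comparison, assuming purely numeric fields (the real script validates each field with `decimal` first):

    # Sketch of the traced cmp_versions logic; reconstructed from the xtrace,
    # not copied from the repo. Returns 0 when $1 is an older version than $2.
    cmp_lt() {
        local IFS=.-:                 # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # ver1 is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # ver1 is older
        done
        return 1                      # equal versions are not less-than
    }
    cmp_lt 1.15 2 && echo "lcov < 2: keep the --rc coverage flags"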
genhtml_function_coverage=1 00:04:32.453 --rc genhtml_legend=1 00:04:32.453 --rc geninfo_all_blocks=1 00:04:32.453 --rc geninfo_unexecuted_blocks=1 00:04:32.453 00:04:32.453 ' 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f17a8710-d966-4f0f-b8ea-4a74bc002ec3 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.453 13:17:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.453 13:17:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.453 13:17:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.453 13:17:20 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.453 13:17:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.453 13:17:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.453 13:17:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.453 INFO: launching applications... 
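The `[: : integer expression expected` message above (also emitted during the json_config run) comes from nvmf/common.sh line 33, where `'[' '' -eq 1 ']'` hands test(1) an empty string in place of an integer. A sketch of the failing pattern next to a guarded variant; the variable name is illustrative, not the repo's:

    # Failing pattern from the trace: the variable expands to an empty string,
    # so test(1) sees '' where an integer is required and prints
    #   [: : integer expression expected
    flag=''
    [ "$flag" -eq 1 ] && echo "feature on"        # reproduces the logged error

    # Guarded variant: default the empty expansion to 0 so the operand is
    # always numeric; the comparison is then quietly false instead of erroring.
    [ "${flag:-0}" -eq 1 ] && echo "feature on"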
00:04:32.453 13:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.453 13:17:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.453 13:17:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.453 13:17:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.453 13:17:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.453 13:17:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57770 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.454 Waiting for target to run... 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57770 /var/tmp/spdk_tgt.sock 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57770 ']' 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.454 13:17:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:32.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.454 13:17:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.711 [2024-11-26 13:17:21.063204] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:32.711 [2024-11-26 13:17:21.063475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57770 ] 00:04:32.969 [2024-11-26 13:17:21.398202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.969 [2024-11-26 13:17:21.490619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.535 13:17:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.535 13:17:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:33.535 13:17:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.535 00:04:33.535 13:17:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:33.535 INFO: shutting down applications... 
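waitforlisten above spins with max_retries=100 until the freshly launched spdk_tgt (pid 57770) answers on /var/tmp/spdk_tgt.sock before the test proceeds. A minimal sketch of that readiness loop; the probe command and the 0.1 s interval are assumptions, though spdk_get_version is at least known to exist, since it appears in the rpc_get_methods listing later in this log:

    # Poll the target's RPC socket until it accepts a request or retries run out.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    for (( i = 0; i < 100; i++ )); do            # max_retries=100, as in the trace
        # Liveness probe (assumed): any cheap RPC that succeeds once the
        # target is listening will do.
        "$rpc" -s "$sock" spdk_get_version &>/dev/null && break
        sleep 0.1
    done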
00:04:33.535 13:17:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.535 13:17:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.535 13:17:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.535 13:17:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57770 ]] 00:04:33.536 13:17:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57770 00:04:33.536 13:17:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.536 13:17:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.536 13:17:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57770 00:04:33.536 13:17:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.102 13:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.102 13:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.102 13:17:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57770 00:04:34.102 13:17:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.698 13:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.698 13:17:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.698 13:17:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57770 00:04:34.698 13:17:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.956 13:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.956 13:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.956 13:17:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57770 00:04:34.956 13:17:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.527 SPDK target shutdown done 00:04:35.527 Success 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57770 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.527 13:17:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.527 13:17:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:35.527 ************************************ 00:04:35.527 END TEST json_config_extra_key 00:04:35.527 ************************************ 00:04:35.527 00:04:35.527 real 0m3.166s 00:04:35.527 user 0m2.731s 00:04:35.527 sys 0m0.412s 00:04:35.527 13:17:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.527 13:17:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.527 13:17:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.527 13:17:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.527 13:17:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.527 13:17:24 -- common/autotest_common.sh@10 -- # set +x 00:04:35.527 
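The half-second polling above is json_config/common.sh waiting out the SIGINT it sent: `kill -0` probes pid 57770 up to 30 times, and 'SPDK target shutdown done' is printed only once the pid has disappeared. The traced loop, condensed into a runnable shape:

    pid=57770                                  # app_pid["target"] from the trace
    kill -SIGINT "$pid"                        # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do           # up to 30 * 0.5 s, as traced
        kill -0 "$pid" 2>/dev/null || break    # -0 probes the pid, sends no signal
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'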
************************************ 00:04:35.527 START TEST alias_rpc 00:04:35.527 ************************************ 00:04:35.527 13:17:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:35.790 * Looking for test storage... 00:04:35.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.790 13:17:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.790 --rc genhtml_branch_coverage=1 00:04:35.790 --rc genhtml_function_coverage=1 00:04:35.790 --rc genhtml_legend=1 00:04:35.790 --rc geninfo_all_blocks=1 00:04:35.790 --rc geninfo_unexecuted_blocks=1 00:04:35.790 00:04:35.790 ' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.790 --rc genhtml_branch_coverage=1 00:04:35.790 --rc genhtml_function_coverage=1 00:04:35.790 --rc genhtml_legend=1 00:04:35.790 --rc geninfo_all_blocks=1 00:04:35.790 --rc geninfo_unexecuted_blocks=1 00:04:35.790 00:04:35.790 ' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.790 --rc genhtml_branch_coverage=1 00:04:35.790 --rc genhtml_function_coverage=1 00:04:35.790 --rc genhtml_legend=1 00:04:35.790 --rc geninfo_all_blocks=1 00:04:35.790 --rc geninfo_unexecuted_blocks=1 00:04:35.790 00:04:35.790 ' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.790 --rc genhtml_branch_coverage=1 00:04:35.790 --rc genhtml_function_coverage=1 00:04:35.790 --rc genhtml_legend=1 00:04:35.790 --rc geninfo_all_blocks=1 00:04:35.790 --rc geninfo_unexecuted_blocks=1 00:04:35.790 00:04:35.790 ' 00:04:35.790 13:17:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:35.790 13:17:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57869 00:04:35.790 13:17:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.790 13:17:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57869 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57869 ']' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.790 13:17:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 [2024-11-26 13:17:24.295103] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:35.790 [2024-11-26 13:17:24.295494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57869 ] 00:04:36.050 [2024-11-26 13:17:24.460248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.050 [2024-11-26 13:17:24.592823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.992 13:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.992 13:17:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57869 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57869 ']' 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57869 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.992 13:17:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57869 00:04:37.253 13:17:25 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.253 13:17:25 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.253 13:17:25 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57869' 00:04:37.253 killing process with pid 57869 00:04:37.253 13:17:25 alias_rpc -- common/autotest_common.sh@973 -- # kill 57869 00:04:37.253 13:17:25 alias_rpc -- common/autotest_common.sh@978 -- # wait 57869 00:04:39.165 ************************************ 00:04:39.165 END TEST alias_rpc 00:04:39.165 ************************************ 00:04:39.165 00:04:39.165 real 0m3.242s 00:04:39.165 user 0m3.238s 00:04:39.165 sys 0m0.534s 00:04:39.165 13:17:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.165 13:17:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.165 13:17:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.165 13:17:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.165 13:17:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.165 13:17:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.165 13:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:39.165 ************************************ 00:04:39.165 START TEST spdkcli_tcp 00:04:39.165 ************************************ 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.165 * Looking for test storage... 
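killprocess, as traced above for pid 57869, first checks the process still exists, reads its command name with ps, refuses to signal anything named 'sudo' (so the wrapper is never killed in place of the reactor), then kills and reaps it. A minimal sketch reconstructed from exactly those traced steps:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1                     # "'[' -z 57869 ']'" in the trace
        kill -0 "$pid" || return 1                    # must still be running
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1        # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap the child, as the trace does
    }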
00:04:39.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.165 13:17:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.165 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.165 --rc genhtml_branch_coverage=1 00:04:39.165 --rc genhtml_function_coverage=1 00:04:39.165 --rc genhtml_legend=1 00:04:39.166 --rc geninfo_all_blocks=1 00:04:39.166 --rc geninfo_unexecuted_blocks=1 00:04:39.166 00:04:39.166 ' 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.166 --rc genhtml_branch_coverage=1 00:04:39.166 --rc genhtml_function_coverage=1 00:04:39.166 --rc genhtml_legend=1 00:04:39.166 --rc geninfo_all_blocks=1 00:04:39.166 --rc geninfo_unexecuted_blocks=1 00:04:39.166 
00:04:39.166 ' 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.166 --rc genhtml_branch_coverage=1 00:04:39.166 --rc genhtml_function_coverage=1 00:04:39.166 --rc genhtml_legend=1 00:04:39.166 --rc geninfo_all_blocks=1 00:04:39.166 --rc geninfo_unexecuted_blocks=1 00:04:39.166 00:04:39.166 ' 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.166 --rc genhtml_branch_coverage=1 00:04:39.166 --rc genhtml_function_coverage=1 00:04:39.166 --rc genhtml_legend=1 00:04:39.166 --rc geninfo_all_blocks=1 00:04:39.166 --rc geninfo_unexecuted_blocks=1 00:04:39.166 00:04:39.166 ' 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57965 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57965 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57965 ']' 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.166 13:17:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.166 13:17:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.166 [2024-11-26 13:17:27.572345] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:39.166 [2024-11-26 13:17:27.572473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57965 ] 00:04:39.427 [2024-11-26 13:17:27.733018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.427 [2024-11-26 13:17:27.835253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.427 [2024-11-26 13:17:27.835419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.000 13:17:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.000 13:17:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:40.000 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57982 00:04:40.000 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.000 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.262 [ 00:04:40.262 "bdev_malloc_delete", 00:04:40.262 "bdev_malloc_create", 00:04:40.262 "bdev_null_resize", 00:04:40.262 "bdev_null_delete", 00:04:40.262 "bdev_null_create", 00:04:40.262 "bdev_nvme_cuse_unregister", 00:04:40.262 "bdev_nvme_cuse_register", 00:04:40.262 "bdev_opal_new_user", 00:04:40.262 "bdev_opal_set_lock_state", 00:04:40.262 "bdev_opal_delete", 00:04:40.262 "bdev_opal_get_info", 00:04:40.262 "bdev_opal_create", 00:04:40.262 "bdev_nvme_opal_revert", 00:04:40.262 "bdev_nvme_opal_init", 00:04:40.262 "bdev_nvme_send_cmd", 00:04:40.262 "bdev_nvme_set_keys", 00:04:40.262 "bdev_nvme_get_path_iostat", 00:04:40.262 "bdev_nvme_get_mdns_discovery_info", 00:04:40.262 "bdev_nvme_stop_mdns_discovery", 00:04:40.262 "bdev_nvme_start_mdns_discovery", 00:04:40.262 "bdev_nvme_set_multipath_policy", 00:04:40.262 "bdev_nvme_set_preferred_path", 00:04:40.262 "bdev_nvme_get_io_paths", 00:04:40.262 "bdev_nvme_remove_error_injection", 00:04:40.262 "bdev_nvme_add_error_injection", 00:04:40.262 "bdev_nvme_get_discovery_info", 00:04:40.262 "bdev_nvme_stop_discovery", 00:04:40.262 "bdev_nvme_start_discovery", 00:04:40.262 "bdev_nvme_get_controller_health_info", 00:04:40.262 "bdev_nvme_disable_controller", 00:04:40.262 "bdev_nvme_enable_controller", 00:04:40.262 "bdev_nvme_reset_controller", 00:04:40.262 "bdev_nvme_get_transport_statistics", 00:04:40.262 "bdev_nvme_apply_firmware", 00:04:40.262 "bdev_nvme_detach_controller", 00:04:40.262 "bdev_nvme_get_controllers", 00:04:40.262 "bdev_nvme_attach_controller", 00:04:40.262 "bdev_nvme_set_hotplug", 00:04:40.262 "bdev_nvme_set_options", 00:04:40.262 "bdev_passthru_delete", 00:04:40.262 "bdev_passthru_create", 00:04:40.262 "bdev_lvol_set_parent_bdev", 00:04:40.262 "bdev_lvol_set_parent", 00:04:40.262 "bdev_lvol_check_shallow_copy", 00:04:40.262 "bdev_lvol_start_shallow_copy", 00:04:40.262 "bdev_lvol_grow_lvstore", 00:04:40.262 "bdev_lvol_get_lvols", 00:04:40.262 "bdev_lvol_get_lvstores", 00:04:40.262 "bdev_lvol_delete", 00:04:40.262 "bdev_lvol_set_read_only", 00:04:40.262 "bdev_lvol_resize", 00:04:40.262 "bdev_lvol_decouple_parent", 00:04:40.262 "bdev_lvol_inflate", 00:04:40.262 "bdev_lvol_rename", 00:04:40.262 "bdev_lvol_clone_bdev", 00:04:40.262 "bdev_lvol_clone", 00:04:40.262 "bdev_lvol_snapshot", 00:04:40.262 "bdev_lvol_create", 00:04:40.262 "bdev_lvol_delete_lvstore", 00:04:40.262 "bdev_lvol_rename_lvstore", 00:04:40.262 
"bdev_lvol_create_lvstore", 00:04:40.262 "bdev_raid_set_options", 00:04:40.262 "bdev_raid_remove_base_bdev", 00:04:40.262 "bdev_raid_add_base_bdev", 00:04:40.262 "bdev_raid_delete", 00:04:40.263 "bdev_raid_create", 00:04:40.263 "bdev_raid_get_bdevs", 00:04:40.263 "bdev_error_inject_error", 00:04:40.263 "bdev_error_delete", 00:04:40.263 "bdev_error_create", 00:04:40.263 "bdev_split_delete", 00:04:40.263 "bdev_split_create", 00:04:40.263 "bdev_delay_delete", 00:04:40.263 "bdev_delay_create", 00:04:40.263 "bdev_delay_update_latency", 00:04:40.263 "bdev_zone_block_delete", 00:04:40.263 "bdev_zone_block_create", 00:04:40.263 "blobfs_create", 00:04:40.263 "blobfs_detect", 00:04:40.263 "blobfs_set_cache_size", 00:04:40.263 "bdev_xnvme_delete", 00:04:40.263 "bdev_xnvme_create", 00:04:40.263 "bdev_aio_delete", 00:04:40.263 "bdev_aio_rescan", 00:04:40.263 "bdev_aio_create", 00:04:40.263 "bdev_ftl_set_property", 00:04:40.263 "bdev_ftl_get_properties", 00:04:40.263 "bdev_ftl_get_stats", 00:04:40.263 "bdev_ftl_unmap", 00:04:40.263 "bdev_ftl_unload", 00:04:40.263 "bdev_ftl_delete", 00:04:40.263 "bdev_ftl_load", 00:04:40.263 "bdev_ftl_create", 00:04:40.263 "bdev_virtio_attach_controller", 00:04:40.263 "bdev_virtio_scsi_get_devices", 00:04:40.263 "bdev_virtio_detach_controller", 00:04:40.263 "bdev_virtio_blk_set_hotplug", 00:04:40.263 "bdev_iscsi_delete", 00:04:40.263 "bdev_iscsi_create", 00:04:40.263 "bdev_iscsi_set_options", 00:04:40.263 "accel_error_inject_error", 00:04:40.263 "ioat_scan_accel_module", 00:04:40.263 "dsa_scan_accel_module", 00:04:40.263 "iaa_scan_accel_module", 00:04:40.263 "keyring_file_remove_key", 00:04:40.263 "keyring_file_add_key", 00:04:40.263 "keyring_linux_set_options", 00:04:40.263 "fsdev_aio_delete", 00:04:40.263 "fsdev_aio_create", 00:04:40.263 "iscsi_get_histogram", 00:04:40.263 "iscsi_enable_histogram", 00:04:40.263 "iscsi_set_options", 00:04:40.263 "iscsi_get_auth_groups", 00:04:40.263 "iscsi_auth_group_remove_secret", 00:04:40.263 "iscsi_auth_group_add_secret", 00:04:40.263 "iscsi_delete_auth_group", 00:04:40.263 "iscsi_create_auth_group", 00:04:40.263 "iscsi_set_discovery_auth", 00:04:40.263 "iscsi_get_options", 00:04:40.263 "iscsi_target_node_request_logout", 00:04:40.263 "iscsi_target_node_set_redirect", 00:04:40.263 "iscsi_target_node_set_auth", 00:04:40.263 "iscsi_target_node_add_lun", 00:04:40.263 "iscsi_get_stats", 00:04:40.263 "iscsi_get_connections", 00:04:40.263 "iscsi_portal_group_set_auth", 00:04:40.263 "iscsi_start_portal_group", 00:04:40.263 "iscsi_delete_portal_group", 00:04:40.263 "iscsi_create_portal_group", 00:04:40.263 "iscsi_get_portal_groups", 00:04:40.263 "iscsi_delete_target_node", 00:04:40.263 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.263 "iscsi_target_node_add_pg_ig_maps", 00:04:40.263 "iscsi_create_target_node", 00:04:40.263 "iscsi_get_target_nodes", 00:04:40.263 "iscsi_delete_initiator_group", 00:04:40.263 "iscsi_initiator_group_remove_initiators", 00:04:40.263 "iscsi_initiator_group_add_initiators", 00:04:40.263 "iscsi_create_initiator_group", 00:04:40.263 "iscsi_get_initiator_groups", 00:04:40.263 "nvmf_set_crdt", 00:04:40.263 "nvmf_set_config", 00:04:40.263 "nvmf_set_max_subsystems", 00:04:40.263 "nvmf_stop_mdns_prr", 00:04:40.263 "nvmf_publish_mdns_prr", 00:04:40.263 "nvmf_subsystem_get_listeners", 00:04:40.263 "nvmf_subsystem_get_qpairs", 00:04:40.263 "nvmf_subsystem_get_controllers", 00:04:40.263 "nvmf_get_stats", 00:04:40.263 "nvmf_get_transports", 00:04:40.263 "nvmf_create_transport", 00:04:40.263 "nvmf_get_targets", 00:04:40.263 
"nvmf_delete_target", 00:04:40.263 "nvmf_create_target", 00:04:40.263 "nvmf_subsystem_allow_any_host", 00:04:40.263 "nvmf_subsystem_set_keys", 00:04:40.263 "nvmf_subsystem_remove_host", 00:04:40.263 "nvmf_subsystem_add_host", 00:04:40.263 "nvmf_ns_remove_host", 00:04:40.263 "nvmf_ns_add_host", 00:04:40.263 "nvmf_subsystem_remove_ns", 00:04:40.263 "nvmf_subsystem_set_ns_ana_group", 00:04:40.263 "nvmf_subsystem_add_ns", 00:04:40.263 "nvmf_subsystem_listener_set_ana_state", 00:04:40.263 "nvmf_discovery_get_referrals", 00:04:40.263 "nvmf_discovery_remove_referral", 00:04:40.263 "nvmf_discovery_add_referral", 00:04:40.263 "nvmf_subsystem_remove_listener", 00:04:40.263 "nvmf_subsystem_add_listener", 00:04:40.263 "nvmf_delete_subsystem", 00:04:40.263 "nvmf_create_subsystem", 00:04:40.263 "nvmf_get_subsystems", 00:04:40.263 "env_dpdk_get_mem_stats", 00:04:40.263 "nbd_get_disks", 00:04:40.263 "nbd_stop_disk", 00:04:40.263 "nbd_start_disk", 00:04:40.263 "ublk_recover_disk", 00:04:40.263 "ublk_get_disks", 00:04:40.263 "ublk_stop_disk", 00:04:40.263 "ublk_start_disk", 00:04:40.263 "ublk_destroy_target", 00:04:40.263 "ublk_create_target", 00:04:40.263 "virtio_blk_create_transport", 00:04:40.263 "virtio_blk_get_transports", 00:04:40.263 "vhost_controller_set_coalescing", 00:04:40.263 "vhost_get_controllers", 00:04:40.263 "vhost_delete_controller", 00:04:40.263 "vhost_create_blk_controller", 00:04:40.263 "vhost_scsi_controller_remove_target", 00:04:40.263 "vhost_scsi_controller_add_target", 00:04:40.263 "vhost_start_scsi_controller", 00:04:40.263 "vhost_create_scsi_controller", 00:04:40.263 "thread_set_cpumask", 00:04:40.263 "scheduler_set_options", 00:04:40.263 "framework_get_governor", 00:04:40.263 "framework_get_scheduler", 00:04:40.263 "framework_set_scheduler", 00:04:40.263 "framework_get_reactors", 00:04:40.263 "thread_get_io_channels", 00:04:40.263 "thread_get_pollers", 00:04:40.263 "thread_get_stats", 00:04:40.263 "framework_monitor_context_switch", 00:04:40.263 "spdk_kill_instance", 00:04:40.263 "log_enable_timestamps", 00:04:40.263 "log_get_flags", 00:04:40.263 "log_clear_flag", 00:04:40.263 "log_set_flag", 00:04:40.263 "log_get_level", 00:04:40.263 "log_set_level", 00:04:40.263 "log_get_print_level", 00:04:40.263 "log_set_print_level", 00:04:40.263 "framework_enable_cpumask_locks", 00:04:40.263 "framework_disable_cpumask_locks", 00:04:40.263 "framework_wait_init", 00:04:40.263 "framework_start_init", 00:04:40.263 "scsi_get_devices", 00:04:40.263 "bdev_get_histogram", 00:04:40.263 "bdev_enable_histogram", 00:04:40.263 "bdev_set_qos_limit", 00:04:40.263 "bdev_set_qd_sampling_period", 00:04:40.263 "bdev_get_bdevs", 00:04:40.263 "bdev_reset_iostat", 00:04:40.263 "bdev_get_iostat", 00:04:40.263 "bdev_examine", 00:04:40.263 "bdev_wait_for_examine", 00:04:40.263 "bdev_set_options", 00:04:40.263 "accel_get_stats", 00:04:40.263 "accel_set_options", 00:04:40.263 "accel_set_driver", 00:04:40.263 "accel_crypto_key_destroy", 00:04:40.263 "accel_crypto_keys_get", 00:04:40.263 "accel_crypto_key_create", 00:04:40.263 "accel_assign_opc", 00:04:40.263 "accel_get_module_info", 00:04:40.263 "accel_get_opc_assignments", 00:04:40.263 "vmd_rescan", 00:04:40.263 "vmd_remove_device", 00:04:40.263 "vmd_enable", 00:04:40.263 "sock_get_default_impl", 00:04:40.263 "sock_set_default_impl", 00:04:40.263 "sock_impl_set_options", 00:04:40.263 "sock_impl_get_options", 00:04:40.263 "iobuf_get_stats", 00:04:40.263 "iobuf_set_options", 00:04:40.263 "keyring_get_keys", 00:04:40.263 "framework_get_pci_devices", 00:04:40.263 
"framework_get_config", 00:04:40.263 "framework_get_subsystems", 00:04:40.263 "fsdev_set_opts", 00:04:40.263 "fsdev_get_opts", 00:04:40.263 "trace_get_info", 00:04:40.263 "trace_get_tpoint_group_mask", 00:04:40.263 "trace_disable_tpoint_group", 00:04:40.263 "trace_enable_tpoint_group", 00:04:40.263 "trace_clear_tpoint_mask", 00:04:40.263 "trace_set_tpoint_mask", 00:04:40.263 "notify_get_notifications", 00:04:40.263 "notify_get_types", 00:04:40.263 "spdk_get_version", 00:04:40.263 "rpc_get_methods" 00:04:40.263 ] 00:04:40.263 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.263 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.263 13:17:28 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57965 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57965 ']' 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57965 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57965 00:04:40.263 killing process with pid 57965 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57965' 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57965 00:04:40.263 13:17:28 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57965 00:04:41.648 ************************************ 00:04:41.648 END TEST spdkcli_tcp 00:04:41.648 ************************************ 00:04:41.648 00:04:41.648 real 0m2.825s 00:04:41.648 user 0m5.130s 00:04:41.648 sys 0m0.421s 00:04:41.648 13:17:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.648 13:17:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.648 13:17:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.648 13:17:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.648 13:17:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.648 13:17:30 -- common/autotest_common.sh@10 -- # set +x 00:04:41.648 ************************************ 00:04:41.648 START TEST dpdk_mem_utility 00:04:41.648 ************************************ 00:04:41.648 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.911 * Looking for test storage... 
00:04:41.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.911 13:17:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.911 --rc genhtml_branch_coverage=1 00:04:41.911 --rc genhtml_function_coverage=1 00:04:41.911 --rc genhtml_legend=1 00:04:41.911 --rc geninfo_all_blocks=1 00:04:41.911 --rc geninfo_unexecuted_blocks=1 00:04:41.911 00:04:41.911 ' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.911 --rc genhtml_branch_coverage=1 00:04:41.911 --rc genhtml_function_coverage=1 00:04:41.911 --rc genhtml_legend=1 00:04:41.911 --rc geninfo_all_blocks=1 00:04:41.911 --rc geninfo_unexecuted_blocks=1 00:04:41.911 00:04:41.911 ' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.911 --rc genhtml_branch_coverage=1 00:04:41.911 --rc genhtml_function_coverage=1 00:04:41.911 --rc genhtml_legend=1 00:04:41.911 --rc geninfo_all_blocks=1 00:04:41.911 --rc geninfo_unexecuted_blocks=1 00:04:41.911 00:04:41.911 ' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.911 --rc genhtml_branch_coverage=1 00:04:41.911 --rc genhtml_function_coverage=1 00:04:41.911 --rc genhtml_legend=1 00:04:41.911 --rc geninfo_all_blocks=1 00:04:41.911 --rc geninfo_unexecuted_blocks=1 00:04:41.911 00:04:41.911 ' 00:04:41.911 13:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.911 13:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58076 00:04:41.911 13:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58076 00:04:41.911 13:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58076 ']' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.911 13:17:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.912 [2024-11-26 13:17:30.418978] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:41.912 [2024-11-26 13:17:30.419210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58076 ] 00:04:42.173 [2024-11-26 13:17:30.573519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.173 [2024-11-26 13:17:30.663078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.739 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.739 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:42.739 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.739 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.739 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.739 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.999 { 00:04:42.999 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.999 } 00:04:42.999 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.000 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.000 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:43.000 1 heaps totaling size 816.000000 MiB 00:04:43.000 size: 816.000000 MiB heap id: 0 00:04:43.000 end heaps---------- 00:04:43.000 9 mempools totaling size 595.772034 MiB 00:04:43.000 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.000 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.000 size: 92.545471 MiB name: bdev_io_58076 00:04:43.000 size: 50.003479 MiB name: msgpool_58076 00:04:43.000 size: 36.509338 MiB name: fsdev_io_58076 00:04:43.000 size: 21.763794 MiB name: PDU_Pool 00:04:43.000 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.000 size: 4.133484 MiB name: evtpool_58076 00:04:43.000 size: 0.026123 MiB name: Session_Pool 00:04:43.000 end mempools------- 00:04:43.000 6 memzones totaling size 4.142822 MiB 00:04:43.000 size: 1.000366 MiB name: RG_ring_0_58076 00:04:43.000 size: 1.000366 MiB name: RG_ring_1_58076 00:04:43.000 size: 1.000366 MiB name: RG_ring_4_58076 00:04:43.000 size: 1.000366 MiB name: RG_ring_5_58076 00:04:43.000 size: 0.125366 MiB name: RG_ring_2_58076 00:04:43.000 size: 0.015991 MiB name: RG_ring_3_58076 00:04:43.000 end memzones------- 00:04:43.000 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.000 heap id: 0 total size: 816.000000 MiB number of busy elements: 321 number of free elements: 18 00:04:43.000 list of free elements. 
size: 16.789917 MiB 00:04:43.000 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:43.000 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:43.000 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:43.000 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:43.000 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:43.000 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:43.000 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:43.000 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:43.000 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:43.000 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:43.000 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:43.000 element at address: 0x20001ac00000 with size: 0.559265 MiB 00:04:43.000 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:43.000 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:43.000 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:43.000 element at address: 0x200012c00000 with size: 0.443481 MiB 00:04:43.000 element at address: 0x200028000000 with size: 0.391418 MiB 00:04:43.000 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:43.000 list of standard malloc elements. size: 199.289185 MiB 00:04:43.000 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:43.000 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:43.000 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:43.000 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:43.000 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:43.000 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:43.000 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:43.000 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:43.000 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:43.000 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:43.000 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:43.000 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:43.000 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:43.000 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:43.000 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:43.000 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:43.001 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:43.001 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac913c0 with size: 0.000244 MiB 
00:04:43.001 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:43.001 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200028064340 with size: 0.000244 MiB 00:04:43.001 element at address: 0x200028064440 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b100 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:43.001 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d280 
with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:43.002 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:43.002 list of memzone associated elements. 
size: 599.920898 MiB 00:04:43.002 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:43.002 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.002 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:43.002 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.002 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:43.002 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58076_0 00:04:43.002 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:43.002 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58076_0 00:04:43.002 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:43.002 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58076_0 00:04:43.002 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:43.002 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.002 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:43.002 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.002 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:43.002 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58076_0 00:04:43.002 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:43.002 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58076 00:04:43.002 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:43.002 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58076 00:04:43.002 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:43.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.002 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:43.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.002 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:43.002 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.002 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:43.002 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.002 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:43.002 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58076 00:04:43.002 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:43.002 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58076 00:04:43.002 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:43.002 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58076 00:04:43.002 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:43.002 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58076 00:04:43.002 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:43.002 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58076 00:04:43.002 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:43.002 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58076 00:04:43.002 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:43.002 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.002 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:43.002 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.002 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:43.002 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.002 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:43.002 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58076 00:04:43.002 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:43.002 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58076 00:04:43.002 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:43.002 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.002 element at address: 0x200028064540 with size: 0.023804 MiB 00:04:43.002 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.002 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:43.002 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58076 00:04:43.002 element at address: 0x20002806a6c0 with size: 0.002502 MiB 00:04:43.002 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.002 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:43.002 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58076 00:04:43.002 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:43.002 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58076 00:04:43.002 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:43.002 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58076 00:04:43.002 element at address: 0x20002806b200 with size: 0.000366 MiB 00:04:43.002 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.002 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.002 13:17:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58076 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58076 ']' 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58076 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58076 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58076' 00:04:43.002 killing process with pid 58076 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58076 00:04:43.002 13:17:31 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58076 00:04:44.379 00:04:44.379 real 0m2.731s 00:04:44.379 user 0m2.797s 00:04:44.379 sys 0m0.384s 00:04:44.379 13:17:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.379 13:17:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.637 ************************************ 00:04:44.637 END TEST dpdk_mem_utility 00:04:44.637 ************************************ 00:04:44.637 13:17:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.637 13:17:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.637 13:17:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.637 13:17:32 -- common/autotest_common.sh@10 -- # set +x 
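The dpdk_mem_utility pass above reduces to two steps: the env_dpdk_get_mem_stats RPC makes the running target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump (plain for the heap/mempool/memzone totals, -m 0 for the per-element listing of heap id 0). A sketch of reproducing the dump by hand, assuming the same repo layout and a target already listening on /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0           # free/busy element detail for heap id 0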
00:04:44.637 ************************************ 00:04:44.637 START TEST event 00:04:44.637 ************************************ 00:04:44.637 13:17:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.637 * Looking for test storage... 00:04:44.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.637 13:17:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.637 13:17:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.637 13:17:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.637 13:17:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.637 13:17:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.637 13:17:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.637 13:17:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.637 13:17:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.637 13:17:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.637 13:17:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.637 13:17:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.637 13:17:33 event -- scripts/common.sh@344 -- # case "$op" in 00:04:44.637 13:17:33 event -- scripts/common.sh@345 -- # : 1 00:04:44.637 13:17:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.637 13:17:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.637 13:17:33 event -- scripts/common.sh@365 -- # decimal 1 00:04:44.637 13:17:33 event -- scripts/common.sh@353 -- # local d=1 00:04:44.637 13:17:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.637 13:17:33 event -- scripts/common.sh@355 -- # echo 1 00:04:44.637 13:17:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.637 13:17:33 event -- scripts/common.sh@366 -- # decimal 2 00:04:44.637 13:17:33 event -- scripts/common.sh@353 -- # local d=2 00:04:44.637 13:17:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.637 13:17:33 event -- scripts/common.sh@355 -- # echo 2 00:04:44.637 13:17:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.637 13:17:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.637 13:17:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.637 13:17:33 event -- scripts/common.sh@368 -- # return 0 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.637 13:17:33 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.637 --rc genhtml_branch_coverage=1 00:04:44.637 --rc genhtml_function_coverage=1 00:04:44.637 --rc genhtml_legend=1 00:04:44.637 --rc geninfo_all_blocks=1 00:04:44.637 --rc geninfo_unexecuted_blocks=1 00:04:44.638 00:04:44.638 ' 00:04:44.638 13:17:33 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.638 --rc genhtml_branch_coverage=1 00:04:44.638 --rc genhtml_function_coverage=1 00:04:44.638 --rc genhtml_legend=1 00:04:44.638 --rc 
geninfo_all_blocks=1 00:04:44.638 --rc geninfo_unexecuted_blocks=1 00:04:44.638 00:04:44.638 ' 00:04:44.638 13:17:33 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.638 --rc genhtml_branch_coverage=1 00:04:44.638 --rc genhtml_function_coverage=1 00:04:44.638 --rc genhtml_legend=1 00:04:44.638 --rc geninfo_all_blocks=1 00:04:44.638 --rc geninfo_unexecuted_blocks=1 00:04:44.638 00:04:44.638 ' 00:04:44.638 13:17:33 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.638 --rc genhtml_branch_coverage=1 00:04:44.638 --rc genhtml_function_coverage=1 00:04:44.638 --rc genhtml_legend=1 00:04:44.638 --rc geninfo_all_blocks=1 00:04:44.638 --rc geninfo_unexecuted_blocks=1 00:04:44.638 00:04:44.638 ' 00:04:44.638 13:17:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:44.638 13:17:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.638 13:17:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.638 13:17:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:44.638 13:17:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.638 13:17:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.638 ************************************ 00:04:44.638 START TEST event_perf 00:04:44.638 ************************************ 00:04:44.638 13:17:33 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.638 Running I/O for 1 seconds...[2024-11-26 13:17:33.184401] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:44.638 [2024-11-26 13:17:33.184605] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 00:04:44.896 [2024-11-26 13:17:33.345195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.896 [2024-11-26 13:17:33.450282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.896 [2024-11-26 13:17:33.450591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.896 [2024-11-26 13:17:33.450789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.896 [2024-11-26 13:17:33.450990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.276 Running I/O for 1 seconds... 00:04:46.276 lcore 0: 202706 00:04:46.276 lcore 1: 202704 00:04:46.276 lcore 2: 202704 00:04:46.276 lcore 3: 202704 00:04:46.276 done. 
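The four per-lcore counters above come out nearly equal (~202.7k events each in the one-second window), which is the balanced dispatch event_perf exercises across the 0xF core mask. A hypothetical way to aggregate those counters from a local run (the awk post-processing is ours, not part of the test; binary path and flags as in this log):

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 |
        awk '/^lcore/ { sum += $3; n++ } END { if (n) printf "total %d events, avg %d per lcore\n", sum, sum / n }'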
00:04:46.276 00:04:46.276 real 0m1.468s 00:04:46.276 user 0m4.262s 00:04:46.276 sys 0m0.087s 00:04:46.276 13:17:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.276 13:17:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.276 ************************************ 00:04:46.276 END TEST event_perf 00:04:46.276 ************************************ 00:04:46.276 13:17:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.276 13:17:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.276 13:17:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.276 13:17:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.276 ************************************ 00:04:46.276 START TEST event_reactor 00:04:46.276 ************************************ 00:04:46.276 13:17:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.276 [2024-11-26 13:17:34.706845] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:46.276 [2024-11-26 13:17:34.707056] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58207 ] 00:04:46.537 [2024-11-26 13:17:34.868674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.537 [2024-11-26 13:17:34.970463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.924 test_start 00:04:47.924 oneshot 00:04:47.924 tick 100 00:04:47.924 tick 100 00:04:47.924 tick 250 00:04:47.924 tick 100 00:04:47.924 tick 100 00:04:47.924 tick 250 00:04:47.924 tick 100 00:04:47.924 tick 500 00:04:47.924 tick 100 00:04:47.924 tick 100 00:04:47.924 tick 250 00:04:47.924 tick 100 00:04:47.924 tick 100 00:04:47.924 test_end 00:04:47.924 00:04:47.924 real 0m1.458s 00:04:47.924 user 0m1.289s 00:04:47.924 sys 0m0.058s 00:04:47.924 ************************************ 00:04:47.924 END TEST event_reactor 00:04:47.924 ************************************ 00:04:47.924 13:17:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.924 13:17:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.924 13:17:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.924 13:17:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:47.924 13:17:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.924 13:17:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.924 ************************************ 00:04:47.924 START TEST event_reactor_perf 00:04:47.924 ************************************ 00:04:47.924 13:17:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.924 [2024-11-26 13:17:36.226535] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:04:47.924 [2024-11-26 13:17:36.226671] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58243 ] 00:04:47.924 [2024-11-26 13:17:36.392524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.182 [2024-11-26 13:17:36.513459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.117 test_start 00:04:49.117 test_end 00:04:49.117 Performance: 316377 events per second 00:04:49.117 ************************************ 00:04:49.117 END TEST event_reactor_perf 00:04:49.117 ************************************ 00:04:49.117 00:04:49.117 real 0m1.475s 00:04:49.117 user 0m1.278s 00:04:49.117 sys 0m0.087s 00:04:49.117 13:17:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.117 13:17:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.375 13:17:37 event -- event/event.sh@49 -- # uname -s 00:04:49.376 13:17:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.376 13:17:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.376 13:17:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.376 13:17:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.376 13:17:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.376 ************************************ 00:04:49.376 START TEST event_scheduler 00:04:49.376 ************************************ 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.376 * Looking for test storage... 
00:04:49.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.376 13:17:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.376 --rc genhtml_branch_coverage=1 00:04:49.376 --rc genhtml_function_coverage=1 00:04:49.376 --rc genhtml_legend=1 00:04:49.376 --rc geninfo_all_blocks=1 00:04:49.376 --rc geninfo_unexecuted_blocks=1 00:04:49.376 00:04:49.376 ' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.376 --rc genhtml_branch_coverage=1 00:04:49.376 --rc genhtml_function_coverage=1 00:04:49.376 --rc genhtml_legend=1 00:04:49.376 --rc geninfo_all_blocks=1 00:04:49.376 --rc geninfo_unexecuted_blocks=1 00:04:49.376 00:04:49.376 ' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.376 --rc genhtml_branch_coverage=1 00:04:49.376 --rc genhtml_function_coverage=1 00:04:49.376 --rc genhtml_legend=1 00:04:49.376 --rc geninfo_all_blocks=1 00:04:49.376 --rc geninfo_unexecuted_blocks=1 00:04:49.376 00:04:49.376 ' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.376 --rc genhtml_branch_coverage=1 00:04:49.376 --rc genhtml_function_coverage=1 00:04:49.376 --rc genhtml_legend=1 00:04:49.376 --rc geninfo_all_blocks=1 00:04:49.376 --rc geninfo_unexecuted_blocks=1 00:04:49.376 00:04:49.376 ' 00:04:49.376 13:17:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.376 13:17:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58314 00:04:49.376 13:17:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.376 13:17:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58314 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58314 ']' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:49.376 13:17:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.376 13:17:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.635 [2024-11-26 13:17:37.942963] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:04:49.635 [2024-11-26 13:17:37.943088] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:04:49.635 [2024-11-26 13:17:38.096184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.894 [2024-11-26 13:17:38.202507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.894 [2024-11-26 13:17:38.202580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.894 [2024-11-26 13:17:38.203025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.894 [2024-11-26 13:17:38.203115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:50.465 13:17:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.465 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.465 POWER: Cannot set governor of lcore 0 to performance 00:04:50.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.465 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.465 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.465 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:50.465 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:50.465 POWER: Unable to set Power Management Environment for lcore 0 00:04:50.465 [2024-11-26 13:17:38.788555] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:50.465 [2024-11-26 13:17:38.788579] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:50.465 [2024-11-26 13:17:38.788589] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:50.465 [2024-11-26 13:17:38.788605] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:50.465 [2024-11-26 13:17:38.788613] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:50.465 [2024-11-26 13:17:38.788622] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.465 13:17:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.465 13:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 [2024-11-26 13:17:39.037016] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:50.725 13:17:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:50.725 13:17:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.725 13:17:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 ************************************ 00:04:50.725 START TEST scheduler_create_thread 00:04:50.725 ************************************ 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 2 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 3 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 4 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 5 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 6 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 7 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 8 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.725 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.725 9 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.726 10 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.726 13:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 13:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 13:17:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:51.660 13:17:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:51.660 13:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.660 13:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.038 ************************************ 00:04:53.038 END TEST scheduler_create_thread 00:04:53.038 ************************************ 00:04:53.038 13:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.038 00:04:53.038 real 0m2.134s 00:04:53.038 user 0m0.015s 00:04:53.038 sys 0m0.006s 00:04:53.038 13:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.038 13:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.038 13:17:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.038 13:17:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58314 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58314 ']' 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58314 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58314 00:04:53.038 killing process with pid 58314 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58314' 00:04:53.038 13:17:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58314 00:04:53.038 13:17:41 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58314 00:04:53.299 [2024-11-26 13:17:41.671502] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:53.901 00:04:53.901 real 0m4.497s 00:04:53.901 user 0m7.663s 00:04:53.901 sys 0m0.353s 00:04:53.901 13:17:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.901 13:17:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.901 ************************************ 00:04:53.901 END TEST event_scheduler 00:04:53.901 ************************************ 00:04:53.901 13:17:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:53.901 13:17:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:53.901 13:17:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.901 13:17:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.901 13:17:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.901 ************************************ 00:04:53.901 START TEST app_repeat 00:04:53.901 ************************************ 00:04:53.901 13:17:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:53.901 13:17:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58409 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.902 Process app_repeat pid: 58409 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58409' 00:04:53.902 spdk_app_start Round 0 00:04:53.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58409 /var/tmp/spdk-nbd.sock 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58409 ']' 00:04:53.902 13:17:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.902 13:17:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.902 [2024-11-26 13:17:42.333251] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
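The scheduler_create_thread test above drives SPDK's dynamic scheduler purely through rpc.py plugin commands: it creates threads pinned to individual cores with fixed activity levels, retunes one of them with scheduler_thread_set_active, and deletes another before tearing the app down. A minimal sketch of the same flow, assuming the scheduler test application is already listening on the default RPC socket and that scheduler_plugin (from test/event/scheduler) is importable; the captured thread id is simply whatever the create call prints:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Create a thread pinned to core 0 that reports 100% activity.
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  # Drop its reported activity to 50% so the dynamic scheduler may rebalance it.
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  # Remove the thread again.
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"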
00:04:53.902 [2024-11-26 13:17:42.333355] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58409 ] 00:04:54.206 [2024-11-26 13:17:42.486347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.206 [2024-11-26 13:17:42.587589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.206 [2024-11-26 13:17:42.587729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.779 13:17:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.779 13:17:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:54.779 13:17:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.039 Malloc0 00:04:55.039 13:17:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.298 Malloc1 00:04:55.298 13:17:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.298 13:17:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.557 /dev/nbd0 00:04:55.557 13:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.557 13:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.557 13:17:43 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.557 1+0 records in 00:04:55.557 1+0 records out 00:04:55.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469467 s, 8.7 MB/s 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.557 13:17:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.557 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.557 13:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.557 13:17:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.557 /dev/nbd1 00:04:55.557 13:17:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.557 13:17:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.557 13:17:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.557 13:17:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.557 13:17:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.816 1+0 records in 00:04:55.816 1+0 records out 00:04:55.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256236 s, 16.0 MB/s 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.816 13:17:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.816 
13:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.816 { 00:04:55.816 "nbd_device": "/dev/nbd0", 00:04:55.816 "bdev_name": "Malloc0" 00:04:55.816 }, 00:04:55.816 { 00:04:55.816 "nbd_device": "/dev/nbd1", 00:04:55.816 "bdev_name": "Malloc1" 00:04:55.816 } 00:04:55.816 ]' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.816 { 00:04:55.816 "nbd_device": "/dev/nbd0", 00:04:55.816 "bdev_name": "Malloc0" 00:04:55.816 }, 00:04:55.816 { 00:04:55.816 "nbd_device": "/dev/nbd1", 00:04:55.816 "bdev_name": "Malloc1" 00:04:55.816 } 00:04:55.816 ]' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.816 /dev/nbd1' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.816 /dev/nbd1' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.816 256+0 records in 00:04:55.816 256+0 records out 00:04:55.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681794 s, 154 MB/s 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.816 256+0 records in 00:04:55.816 256+0 records out 00:04:55.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152397 s, 68.8 MB/s 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.816 13:17:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.075 256+0 records in 00:04:56.075 256+0 records out 00:04:56.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243368 s, 43.1 MB/s 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.075 13:17:44 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.075 13:17:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.334 13:17:44 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.334 13:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.594 13:17:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.594 13:17:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.852 13:17:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.788 [2024-11-26 13:17:46.078250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.788 [2024-11-26 13:17:46.154462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.788 [2024-11-26 13:17:46.154476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.788 [2024-11-26 13:17:46.261311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.788 [2024-11-26 13:17:46.261372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.321 13:17:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.321 spdk_app_start Round 1 00:05:00.321 13:17:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.321 13:17:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58409 /var/tmp/spdk-nbd.sock 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58409 ']' 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
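Each app_repeat round above follows the same nbd verification pattern: create two malloc bdevs over the test app's Unix socket, export them as kernel nbd devices, write 1 MiB of random data through each device with dd, and compare it back with cmp before stopping the disks. A condensed sketch for a single device, with the temp-file path illustrative rather than taken from this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096              # 64 MB bdev with 4096-byte blocks -> Malloc0
  $rpc nbd_start_disk Malloc0 /dev/nbd0        # expose it as a kernel block device
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # verify the data round-trips intact
  $rpc nbd_stop_disk /dev/nbd0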
00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.321 13:17:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.321 13:17:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.321 Malloc0 00:05:00.321 13:17:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.580 Malloc1 00:05:00.580 13:17:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.580 13:17:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.841 /dev/nbd0 00:05:00.841 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.841 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.841 1+0 records in 00:05:00.841 1+0 records out 
00:05:00.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348857 s, 11.7 MB/s 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.841 13:17:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.841 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.841 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.841 13:17:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.101 /dev/nbd1 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.101 1+0 records in 00:05:01.101 1+0 records out 00:05:01.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262021 s, 15.6 MB/s 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:01.101 13:17:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.101 13:17:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.362 { 00:05:01.362 "nbd_device": "/dev/nbd0", 00:05:01.362 "bdev_name": "Malloc0" 00:05:01.362 }, 00:05:01.362 { 00:05:01.362 "nbd_device": "/dev/nbd1", 00:05:01.362 "bdev_name": "Malloc1" 00:05:01.362 } 
00:05:01.362 ]' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.362 { 00:05:01.362 "nbd_device": "/dev/nbd0", 00:05:01.362 "bdev_name": "Malloc0" 00:05:01.362 }, 00:05:01.362 { 00:05:01.362 "nbd_device": "/dev/nbd1", 00:05:01.362 "bdev_name": "Malloc1" 00:05:01.362 } 00:05:01.362 ]' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.362 /dev/nbd1' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.362 /dev/nbd1' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.362 256+0 records in 00:05:01.362 256+0 records out 00:05:01.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00828725 s, 127 MB/s 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.362 256+0 records in 00:05:01.362 256+0 records out 00:05:01.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128141 s, 81.8 MB/s 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.362 256+0 records in 00:05:01.362 256+0 records out 00:05:01.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187803 s, 55.8 MB/s 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.362 13:17:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.362 13:17:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.623 13:17:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.624 13:17:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.884 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.144 13:17:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.144 13:17:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.404 13:17:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.976 [2024-11-26 13:17:51.311003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.976 [2024-11-26 13:17:51.389079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.976 [2024-11-26 13:17:51.389257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.976 [2024-11-26 13:17:51.492974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.976 [2024-11-26 13:17:51.493023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.513 spdk_app_start Round 2 00:05:05.513 13:17:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.513 13:17:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.513 13:17:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58409 /var/tmp/spdk-nbd.sock 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58409 ']' 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
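The nbd_get_count checks interleaved above confirm cleanup: once both disks are stopped, nbd_get_disks returns an empty JSON array, jq extracts no device names, and grep -c counts zero matches (the script's trailing true absorbs grep's non-zero exit status when nothing matches). A small sketch of that accounting, reusing the $rpc variable from the previous sketch:

  json=$($rpc nbd_get_disks)                         # '[]' once all disks are stopped
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)  # grep exits 1 on zero matches
  [ "$count" -eq 0 ] || echo "unexpected nbd devices still attached: $names"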
00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.513 13:17:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.513 13:17:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.770 Malloc0 00:05:05.770 13:17:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.027 Malloc1 00:05:06.027 13:17:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.027 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.028 13:17:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.287 /dev/nbd0 00:05:06.287 13:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.287 13:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:06.287 13:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.288 1+0 records in 00:05:06.288 1+0 records out 
00:05:06.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154977 s, 26.4 MB/s 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.288 13:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.288 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.288 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.288 13:17:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.288 /dev/nbd1 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.548 1+0 records in 00:05:06.548 1+0 records out 00:05:06.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243248 s, 16.8 MB/s 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.548 13:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.548 13:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.549 13:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.549 { 00:05:06.549 "nbd_device": "/dev/nbd0", 00:05:06.549 "bdev_name": "Malloc0" 00:05:06.549 }, 00:05:06.549 { 00:05:06.549 "nbd_device": "/dev/nbd1", 00:05:06.549 "bdev_name": "Malloc1" 00:05:06.549 } 
00:05:06.549 ]' 00:05:06.549 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.549 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.549 { 00:05:06.549 "nbd_device": "/dev/nbd0", 00:05:06.549 "bdev_name": "Malloc0" 00:05:06.549 }, 00:05:06.549 { 00:05:06.549 "nbd_device": "/dev/nbd1", 00:05:06.549 "bdev_name": "Malloc1" 00:05:06.549 } 00:05:06.549 ]' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.806 /dev/nbd1' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.806 /dev/nbd1' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.806 256+0 records in 00:05:06.806 256+0 records out 00:05:06.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080531 s, 130 MB/s 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.806 256+0 records in 00:05:06.806 256+0 records out 00:05:06.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155556 s, 67.4 MB/s 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.806 256+0 records in 00:05:06.806 256+0 records out 00:05:06.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186028 s, 56.4 MB/s 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.806 13:17:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.807 13:17:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.063 13:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.064 13:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.320 13:17:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.321 13:17:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.577 13:17:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.142 [2024-11-26 13:17:56.653752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.400 [2024-11-26 13:17:56.730566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.400 [2024-11-26 13:17:56.730740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.400 [2024-11-26 13:17:56.827559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.400 [2024-11-26 13:17:56.827601] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.927 13:17:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58409 /var/tmp/spdk-nbd.sock 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58409 ']' 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
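With app_repeat finished, cpu_locks begins by probing the installed lcov: scripts/common.sh splits each dotted version string on ".-:" and compares the fields numerically, so the traced "lt 1.15 2" asks whether the reported version sorts below 2. A condensed, behavior-only sketch of that comparison (not the exact cmp_versions implementation):

  lt() {  # succeeds when version $1 sorts strictly before $2
    local IFS=.-: i v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2.x"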
00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:10.927 13:17:59 event.app_repeat -- event/event.sh@39 -- # killprocess 58409 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58409 ']' 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58409 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58409 00:05:10.927 killing process with pid 58409 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58409' 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58409 00:05:10.927 13:17:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58409 00:05:11.494 spdk_app_start is called in Round 0. 00:05:11.494 Shutdown signal received, stop current app iteration 00:05:11.494 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:11.494 spdk_app_start is called in Round 1. 00:05:11.494 Shutdown signal received, stop current app iteration 00:05:11.494 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:11.494 spdk_app_start is called in Round 2. 00:05:11.494 Shutdown signal received, stop current app iteration 00:05:11.494 Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 reinitialization... 00:05:11.494 spdk_app_start is called in Round 3. 00:05:11.494 Shutdown signal received, stop current app iteration 00:05:11.494 13:17:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.494 13:17:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.494 00:05:11.494 real 0m17.525s 00:05:11.494 user 0m38.393s 00:05:11.494 sys 0m1.971s 00:05:11.494 13:17:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.494 ************************************ 00:05:11.494 END TEST app_repeat 00:05:11.494 ************************************ 00:05:11.494 13:17:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.494 13:17:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.494 13:17:59 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.494 13:17:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.494 13:17:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.494 13:17:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.494 ************************************ 00:05:11.494 START TEST cpu_locks 00:05:11.494 ************************************ 00:05:11.494 13:17:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.494 * Looking for test storage... 
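killprocess, exercised here and before every later teardown, is careful about what it signals. A condensed rendering of the steps visible in the xtrace (the real helper in autotest_common.sh also branches on uname and carries extra retry logic):

killprocess() {
  local pid=$1
  [[ -n "$pid" ]] || return 1
  kill -0 "$pid" || return 1                         # still alive?
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")    # an SPDK reactor reports as reactor_0
  [[ "$process_name" == sudo ]] && return 1          # never SIGTERM a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                    # reap it if it is our child
}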
00:05:11.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.494 13:17:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.494 13:17:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.494 13:17:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.494 13:17:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.494 13:17:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.494 13:18:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.494 13:18:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.494 13:18:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.494 --rc genhtml_branch_coverage=1 00:05:11.494 --rc genhtml_function_coverage=1 00:05:11.494 --rc genhtml_legend=1 00:05:11.494 --rc geninfo_all_blocks=1 00:05:11.494 --rc geninfo_unexecuted_blocks=1 00:05:11.494 00:05:11.494 ' 00:05:11.494 13:18:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.494 --rc genhtml_branch_coverage=1 00:05:11.494 --rc genhtml_function_coverage=1 
00:05:11.494 --rc genhtml_legend=1 00:05:11.494 --rc geninfo_all_blocks=1 00:05:11.494 --rc geninfo_unexecuted_blocks=1 00:05:11.494 00:05:11.494 ' 00:05:11.494 13:18:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.494 --rc genhtml_branch_coverage=1 00:05:11.494 --rc genhtml_function_coverage=1 00:05:11.494 --rc genhtml_legend=1 00:05:11.494 --rc geninfo_all_blocks=1 00:05:11.494 --rc geninfo_unexecuted_blocks=1 00:05:11.494 00:05:11.494 ' 00:05:11.494 13:18:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.494 --rc genhtml_branch_coverage=1 00:05:11.494 --rc genhtml_function_coverage=1 00:05:11.494 --rc genhtml_legend=1 00:05:11.495 --rc geninfo_all_blocks=1 00:05:11.495 --rc geninfo_unexecuted_blocks=1 00:05:11.495 00:05:11.495 ' 00:05:11.495 13:18:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.495 13:18:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.495 13:18:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.495 13:18:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.495 13:18:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.495 13:18:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.495 13:18:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.495 ************************************ 00:05:11.495 START TEST default_locks 00:05:11.495 ************************************ 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58845 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58845 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58845 ']' 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.495 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.753 [2024-11-26 13:18:00.092953] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
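default_locks starts a bare spdk_tgt pinned to core 0 and blocks in waitforlisten until the RPC socket answers (the EAL parameter dump for that launch continues below). The polling waitforlisten performs can be approximated as follows; rpc_get_methods is a real SPDK RPC, but the retry shape here is a sketch, not the helper's exact source:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" -m 0x1 &                  # -m 0x1: reactor mask = core 0 only
spdk_tgt_pid=$!
for ((i = 0; i < 100; i++)); do       # max_retries=100, matching the trace
  "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
  sleep 0.5
done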
00:05:11.753 [2024-11-26 13:18:00.093069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58845 ] 00:05:11.753 [2024-11-26 13:18:00.252522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.011 [2024-11-26 13:18:00.351939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.578 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.578 13:18:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:12.578 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58845 00:05:12.578 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.578 13:18:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58845 00:05:12.578 13:18:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58845 00:05:12.578 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58845 ']' 00:05:12.578 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58845 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58845 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.837 killing process with pid 58845 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58845' 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58845 00:05:12.837 13:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58845 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58845 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58845 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58845 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58845 ']' 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.211 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58845) - No such process 00:05:14.211 ERROR: process (pid: 58845) is no longer running 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.211 00:05:14.211 real 0m2.652s 00:05:14.211 user 0m2.594s 00:05:14.211 sys 0m0.458s 00:05:14.211 ************************************ 00:05:14.211 END TEST default_locks 00:05:14.211 ************************************ 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.211 13:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.211 13:18:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:14.211 13:18:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.211 13:18:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.211 13:18:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.211 ************************************ 00:05:14.211 START TEST default_locks_via_rpc 00:05:14.211 ************************************ 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58898 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58898 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58898 ']' 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
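The negative check above relies on autotest_common.sh's NOT wrapper: it runs a command and succeeds only if the command fails, which is how the script proves that waitforlisten cannot reattach to the killed pid 58845. Simplified here (the traced version also screens signal exits via (( es > 128 )) and an allow-list of expected statuses):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))      # invert: a failing command makes NOT succeed
}
NOT kill -0 58845    # passes only because pid 58845 is gone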
00:05:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.211 13:18:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.469 [2024-11-26 13:18:02.809702] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:14.469 [2024-11-26 13:18:02.809820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:05:14.469 [2024-11-26 13:18:02.965718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.727 [2024-11-26 13:18:03.066978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58898 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.293 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58898 ']' 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.551 
13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.551 killing process with pid 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58898' 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58898 00:05:15.551 13:18:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58898 00:05:16.924 00:05:16.924 real 0m2.568s 00:05:16.924 user 0m2.565s 00:05:16.924 sys 0m0.443s 00:05:16.924 13:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.924 ************************************ 00:05:16.924 END TEST default_locks_via_rpc 00:05:16.924 ************************************ 00:05:16.924 13:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.924 13:18:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:16.924 13:18:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.924 13:18:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.924 13:18:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.924 ************************************ 00:05:16.924 START TEST non_locking_app_on_locked_coremask 00:05:16.924 ************************************ 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58961 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.924 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.925 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.925 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.925 13:18:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.925 [2024-11-26 13:18:05.427555] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
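default_locks_via_rpc, which just finished above, reduces to two RPCs: framework_disable_cpumask_locks releases the per-core locks while the app keeps running, and framework_enable_cpumask_locks re-claims them, after which lslocks sees spdk_cpu_lock again. A sketch against the socket used in this run:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core claims
"$rpc_py" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # take them back
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock                # claim is visible again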
00:05:16.925 [2024-11-26 13:18:05.427669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:05:17.183 [2024-11-26 13:18:05.583873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.183 [2024-11-26 13:18:05.663216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.748 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58977 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58977 /var/tmp/spdk2.sock 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.749 13:18:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.749 [2024-11-26 13:18:06.277858] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:17.749 [2024-11-26 13:18:06.277973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:05:18.007 [2024-11-26 13:18:06.442437] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
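non_locking_app_on_locked_coremask pairs a locked and an unlocked target on one core: the first instance claims core 0 normally, while the second passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above) plus its own -r socket, so both run on core 0 at once. In outline:

"$spdk_tgt" -m 0x1 &                                                 # claims spdk_cpu_lock_000
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no claim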
00:05:18.007 [2024-11-26 13:18:06.442481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.265 [2024-11-26 13:18:06.609993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.198 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.198 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.198 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58961 00:05:19.198 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58961 00:05:19.198 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.456 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58961 00:05:19.456 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58961 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.457 killing process with pid 58961 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58961 00:05:19.457 13:18:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58961 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58977 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58977 ']' 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58977 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.984 killing process with pid 58977 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977' 00:05:21.984 13:18:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58977 00:05:21.984 13:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58977 00:05:23.359 00:05:23.359 real 0m6.357s 00:05:23.359 user 0m6.593s 00:05:23.359 sys 0m0.787s 00:05:23.359 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.359 13:18:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.359 ************************************ 00:05:23.359 END TEST non_locking_app_on_locked_coremask 00:05:23.359 ************************************ 00:05:23.359 13:18:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.359 13:18:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.359 13:18:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.359 13:18:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.359 ************************************ 00:05:23.359 START TEST locking_app_on_unlocked_coremask 00:05:23.359 ************************************ 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59068 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59068 /var/tmp/spdk.sock 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59068 ']' 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.359 13:18:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.359 [2024-11-26 13:18:11.826734] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:23.359 [2024-11-26 13:18:11.826821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59068 ] 00:05:23.618 [2024-11-26 13:18:11.976021] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.618 [2024-11-26 13:18:11.976060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.618 [2024-11-26 13:18:12.056854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59084 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59084 /var/tmp/spdk2.sock 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.184 13:18:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.184 [2024-11-26 13:18:12.744723] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
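locking_app_on_unlocked_coremask inverts the previous arrangement: here the first target (pid 59068) is the one that opted out with --disable-cpumask-locks, so the second target whose startup continues below can claim core 0 normally even though both share it. In outline:

"$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # pid 59068: core 0 left unclaimed
"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # pid 59084: the claim succeeds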
00:05:24.184 [2024-11-26 13:18:12.744835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:05:24.442 [2024-11-26 13:18:12.907044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.702 [2024-11-26 13:18:13.067764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.637 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.637 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:25.637 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59084 00:05:25.637 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.637 13:18:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59084 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59068 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59068 ']' 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59068 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59068 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.895 killing process with pid 59068 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59068' 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59068 00:05:25.895 13:18:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59068 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59084 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59084 ']' 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59084 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59084 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.427 killing process with pid 59084 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59084' 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59084 00:05:28.427 13:18:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59084 00:05:29.361 00:05:29.361 real 0m6.077s 00:05:29.361 user 0m6.317s 00:05:29.361 sys 0m0.810s 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.361 ************************************ 00:05:29.361 END TEST locking_app_on_unlocked_coremask 00:05:29.361 ************************************ 00:05:29.361 13:18:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.361 13:18:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.361 13:18:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.361 13:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.361 ************************************ 00:05:29.361 START TEST locking_app_on_locked_coremask 00:05:29.361 ************************************ 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59175 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59175 /var/tmp/spdk.sock 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59175 ']' 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.361 13:18:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 [2024-11-26 13:18:17.949764] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
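locks_exist, traced repeatedly through these tests, is the whole detection mechanism: SPDK pins each claimed core with a POSIX lock on /var/tmp/spdk_cpu_lock_NNN, and lslocks -p lists the locks a pid holds, so a single grep settles whether the claim is in place:

locks_exist() {
  local pid=$1
  lslocks -p "$pid" | grep -q spdk_cpu_lock   # any spdk_cpu_lock_* entry counts
}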
00:05:29.619 [2024-11-26 13:18:17.949873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59175 ] 00:05:29.619 [2024-11-26 13:18:18.106707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.620 [2024-11-26 13:18:18.185339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59191 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59191 /var/tmp/spdk2.sock 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59191 /var/tmp/spdk2.sock 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59191 /var/tmp/spdk2.sock 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59191 ']' 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.554 13:18:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.554 [2024-11-26 13:18:18.856159] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
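locking_app_on_locked_coremask now launches a second target that wants the lock on the already-claimed core 0; its startup (parameters below) is expected to abort, which the script asserts by wrapping waitforlisten in NOT. One way to express that expectation, as a sketch rather than the test's literal code:

"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &   # core 0 is held by pid 59175
spdk_tgt_pid2=$!
NOT wait "$spdk_tgt_pid2"   # the app exits nonzero: it cannot acquire the core lock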
00:05:30.554 [2024-11-26 13:18:18.856270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:05:30.554 [2024-11-26 13:18:19.021123] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59175 has claimed it. 00:05:30.554 [2024-11-26 13:18:19.021171] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.118 ERROR: process (pid: 59191) is no longer running 00:05:31.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59191) - No such process 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59175 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59175 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59175 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59175 ']' 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59175 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.118 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59175 00:05:31.377 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.377 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.377 killing process with pid 59175 00:05:31.377 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59175' 00:05:31.377 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59175 00:05:31.377 13:18:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59175 00:05:32.750 00:05:32.750 real 0m3.005s 00:05:32.750 user 0m3.245s 00:05:32.750 sys 0m0.517s 00:05:32.750 13:18:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.750 ************************************ 00:05:32.750 END 
TEST locking_app_on_locked_coremask 00:05:32.750 ************************************ 00:05:32.750 13:18:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.750 13:18:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:32.750 13:18:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.750 13:18:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.750 13:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.750 ************************************ 00:05:32.750 START TEST locking_overlapped_coremask 00:05:32.750 ************************************ 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59244 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59244 /var/tmp/spdk.sock 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59244 ']' 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.750 13:18:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.750 [2024-11-26 13:18:21.018265] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:32.750 [2024-11-26 13:18:21.018382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59244 ] 00:05:32.750 [2024-11-26 13:18:21.176115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.750 [2024-11-26 13:18:21.261398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.750 [2024-11-26 13:18:21.261658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.750 [2024-11-26 13:18:21.261749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59262 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59262 /var/tmp/spdk2.sock 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59262 /var/tmp/spdk2.sock 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:33.314 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59262 /var/tmp/spdk2.sock 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59262 ']' 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.315 13:18:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 [2024-11-26 13:18:21.918798] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
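The overlapped test's core masks are plain hex bitmaps: -m 0x7 (binary 111) claims cores 0 through 2, and the second target's -m 0x1c (binary 11100) asks for cores 2 through 4. The masks intersect on core 2, so the second claim must fail, which is exactly the error traced below:

printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: bit 2 set, i.e. core 2 is contested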
00:05:33.574 [2024-11-26 13:18:21.918913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:05:33.574 [2024-11-26 13:18:22.092849] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59244 has claimed it. 00:05:33.574 [2024-11-26 13:18:22.092902] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59262) - No such process 00:05:34.190 ERROR: process (pid: 59262) is no longer running 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59244 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59244 ']' 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59244 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59244 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.190 killing process with pid 59244 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59244' 00:05:34.190 13:18:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59244 00:05:34.190 13:18:22 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59244 00:05:35.562 00:05:35.562 real 0m2.857s 00:05:35.562 user 0m7.807s 00:05:35.562 sys 0m0.394s 00:05:35.562 13:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.562 ************************************ 00:05:35.562 END TEST locking_overlapped_coremask 00:05:35.562 ************************************ 00:05:35.562 13:18:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.562 13:18:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:35.563 13:18:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.563 13:18:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.563 13:18:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.563 ************************************ 00:05:35.563 START TEST locking_overlapped_coremask_via_rpc 00:05:35.563 ************************************ 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59315 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59315 /var/tmp/spdk.sock 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59315 ']' 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.563 13:18:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.563 [2024-11-26 13:18:23.945775] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:35.563 [2024-11-26 13:18:23.945888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:05:35.563 [2024-11-26 13:18:24.103366] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
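Two details worth unpacking at this point. First, the check_remaining_locks step in the test that just finished works because SPDK backs every claimed core with a lock file named /var/tmp/spdk_cpu_lock_NNN, so a target started with -m 0x7 leaves exactly three of them. Second, the "CPU core locks deactivated" notice above confirms that --disable-cpumask-locks skips that claiming step entirely at startup. A standalone sketch of the lock-file check, using the same glob and brace-expansion trick as the harness (the echo is illustrative, not part of the suite):

    locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. coremask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"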
00:05:35.563 [2024-11-26 13:18:24.103400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.821 [2024-11-26 13:18:24.186000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.821 [2024-11-26 13:18:24.186353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.821 [2024-11-26 13:18:24.186370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59333 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59333 /var/tmp/spdk2.sock 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59333 ']' 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.388 13:18:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.388 [2024-11-26 13:18:24.854220] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:36.388 [2024-11-26 13:18:24.854343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59333 ] 00:05:36.647 [2024-11-26 13:18:25.019603] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
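The masks chosen here are what make this RPC variant of the test meaningful: 0x7 is binary 111 (cores 0-2, matching the reactors just started) and 0x1c is binary 11100 (cores 2-4), so the two targets overlap on core 2 only, and with locks disabled both can come up anyway. A quick sketch to decode any coremask by hand (illustrative only, not part of the suite):

    for mask in 0x7 0x1c; do
        printf 'mask %-4s -> cores:' "$mask"
        for core in {0..7}; do (( (mask >> core) & 1 )) && printf ' %d' "$core"; done
        echo
    done
    # mask 0x7  -> cores: 0 1 2
    # mask 0x1c -> cores: 2 3 4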
00:05:36.647 [2024-11-26 13:18:25.019645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.647 [2024-11-26 13:18:25.194106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.647 [2024-11-26 13:18:25.194138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.647 [2024-11-26 13:18:25.194160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 [2024-11-26 13:18:26.176563] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59315 has claimed it. 00:05:38.093 request: 00:05:38.093 { 00:05:38.093 "method": "framework_enable_cpumask_locks", 00:05:38.093 "req_id": 1 00:05:38.093 } 00:05:38.093 Got JSON-RPC error response 00:05:38.093 response: 00:05:38.093 { 00:05:38.093 "code": -32603, 00:05:38.093 "message": "Failed to claim CPU core: 2" 00:05:38.093 } 00:05:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
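The -32603 failure above is reproducible by hand: with both targets up lock-free, enabling locks on the first instance claims cores 0-2, and the same RPC against the second instance then fails because the core-2 lock file is already held by pid 59315. A sketch using the stock rpc.py client, with the socket paths from this run (rpc.py defaults to /var/tmp/spdk.sock when -s is omitted):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: succeeds, locks cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: "Failed to claim CPU core: 2"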
00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59315 /var/tmp/spdk.sock 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59315 ']' 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.093 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59333 /var/tmp/spdk2.sock 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59333 ']' 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
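Both waitforlisten calls above poll until their target answers on its Unix socket before the test proceeds; in autotest_common.sh that is a bounded retry loop (max_retries=100) around an RPC ping. The gist, reduced to a minimal sketch rather than the harness's exact code:

    until scripts/rpc.py -s /var/tmp/spdk2.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1   # not listening yet; the real helper also caps retries at max_retries
    done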
00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.094 ************************************ 00:05:38.094 END TEST locking_overlapped_coremask_via_rpc 00:05:38.094 ************************************ 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.094 00:05:38.094 real 0m2.743s 00:05:38.094 user 0m1.072s 00:05:38.094 sys 0m0.129s 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.094 13:18:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.094 13:18:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.094 13:18:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59315 ]] 00:05:38.094 13:18:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59315 00:05:38.094 13:18:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59315 ']' 00:05:38.094 13:18:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59315 00:05:38.094 13:18:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59315 00:05:38.428 killing process with pid 59315 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59315' 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59315 00:05:38.428 13:18:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59315 00:05:39.375 13:18:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59333 ]] 00:05:39.375 13:18:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59333 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59333 ']' 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59333 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.375 
13:18:27 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59333 00:05:39.375 killing process with pid 59333 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59333' 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59333 00:05:39.375 13:18:27 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59333 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59315 ]] 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59315 00:05:40.750 Process with pid 59315 is not found 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59315 ']' 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59315 00:05:40.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59315) - No such process 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59315 is not found' 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59333 ]] 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59333 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59333 ']' 00:05:40.750 Process with pid 59333 is not found 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59333 00:05:40.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59333) - No such process 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59333 is not found' 00:05:40.750 13:18:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.750 ************************************ 00:05:40.750 END TEST cpu_locks 00:05:40.750 ************************************ 00:05:40.750 00:05:40.750 real 0m29.266s 00:05:40.750 user 0m49.870s 00:05:40.750 sys 0m4.319s 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.750 13:18:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.750 ************************************ 00:05:40.750 END TEST event 00:05:40.750 ************************************ 00:05:40.750 00:05:40.750 real 0m56.183s 00:05:40.750 user 1m42.937s 00:05:40.750 sys 0m7.099s 00:05:40.750 13:18:29 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.750 13:18:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.750 13:18:29 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:40.750 13:18:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.750 13:18:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.750 13:18:29 -- common/autotest_common.sh@10 -- # set +x 00:05:40.750 ************************************ 00:05:40.750 START TEST thread 00:05:40.750 ************************************ 00:05:40.750 13:18:29 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:40.750 * Looking for test storage... 
00:05:40.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:40.750 13:18:29 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.750 13:18:29 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.750 13:18:29 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.008 13:18:29 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.008 13:18:29 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.009 13:18:29 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.009 13:18:29 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.009 13:18:29 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.009 13:18:29 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.009 13:18:29 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.009 13:18:29 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.009 13:18:29 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.009 13:18:29 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.009 13:18:29 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.009 13:18:29 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.009 13:18:29 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.009 13:18:29 thread -- scripts/common.sh@345 -- # : 1 00:05:41.009 13:18:29 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.009 13:18:29 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.009 13:18:29 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.009 13:18:29 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.009 13:18:29 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.009 13:18:29 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.009 13:18:29 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.009 13:18:29 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.009 13:18:29 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.009 13:18:29 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.009 13:18:29 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.009 13:18:29 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.009 13:18:29 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.009 13:18:29 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.009 13:18:29 thread -- scripts/common.sh@368 -- # return 0 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.009 --rc genhtml_branch_coverage=1 00:05:41.009 --rc genhtml_function_coverage=1 00:05:41.009 --rc genhtml_legend=1 00:05:41.009 --rc geninfo_all_blocks=1 00:05:41.009 --rc geninfo_unexecuted_blocks=1 00:05:41.009 00:05:41.009 ' 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.009 --rc genhtml_branch_coverage=1 00:05:41.009 --rc genhtml_function_coverage=1 00:05:41.009 --rc genhtml_legend=1 00:05:41.009 --rc geninfo_all_blocks=1 00:05:41.009 --rc geninfo_unexecuted_blocks=1 00:05:41.009 00:05:41.009 ' 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.009 --rc genhtml_branch_coverage=1 00:05:41.009 --rc genhtml_function_coverage=1 00:05:41.009 --rc genhtml_legend=1 00:05:41.009 --rc geninfo_all_blocks=1 00:05:41.009 --rc geninfo_unexecuted_blocks=1 00:05:41.009 00:05:41.009 ' 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.009 --rc genhtml_branch_coverage=1 00:05:41.009 --rc genhtml_function_coverage=1 00:05:41.009 --rc genhtml_legend=1 00:05:41.009 --rc geninfo_all_blocks=1 00:05:41.009 --rc geninfo_unexecuted_blocks=1 00:05:41.009 00:05:41.009 ' 00:05:41.009 13:18:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.009 13:18:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.009 ************************************ 00:05:41.009 START TEST thread_poller_perf 00:05:41.009 ************************************ 00:05:41.009 13:18:29 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.009 [2024-11-26 13:18:29.406518] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:41.009 [2024-11-26 13:18:29.406716] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59482 ] 00:05:41.009 [2024-11-26 13:18:29.562017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.267 [2024-11-26 13:18:29.646399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.267 Running 1000 pollers for 1 seconds with 1 microseconds period. 
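The poller_perf flags map directly onto the announcement above: -b is the number of pollers to register (1000 here), -l the poller period in microseconds (1, i.e. timed pollers; the second run below uses -l 0 for busy-loop pollers), and -t the run time in seconds. A hypothetical invocation with half the pollers, same binary (illustrative only):

    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 500 -l 1 -t 1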
00:05:42.202 [2024-11-26T13:18:30.772Z] ====================================== 00:05:42.202 [2024-11-26T13:18:30.772Z] busy:2611677002 (cyc) 00:05:42.202 [2024-11-26T13:18:30.772Z] total_run_count: 389000 00:05:42.202 [2024-11-26T13:18:30.772Z] tsc_hz: 2600000000 (cyc) 00:05:42.202 [2024-11-26T13:18:30.772Z] ====================================== 00:05:42.202 [2024-11-26T13:18:30.773Z] poller_cost: 6713 (cyc), 2581 (nsec) 00:05:42.461 00:05:42.461 real 0m1.402s 00:05:42.461 user 0m1.225s 00:05:42.461 ************************************ 00:05:42.461 END TEST thread_poller_perf 00:05:42.461 ************************************ 00:05:42.461 sys 0m0.070s 00:05:42.461 13:18:30 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.461 13:18:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 13:18:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.461 13:18:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:42.461 13:18:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.461 13:18:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 ************************************ 00:05:42.461 START TEST thread_poller_perf 00:05:42.461 ************************************ 00:05:42.461 13:18:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.461 [2024-11-26 13:18:30.867343] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:42.461 [2024-11-26 13:18:30.867466] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59519 ] 00:05:42.461 [2024-11-26 13:18:31.022633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.719 Running 1000 pollers for 1 seconds with 0 microseconds period. 
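The poller_cost line in the table above is just the busy TSC cycle counter divided by the number of poller invocations, converted to nanoseconds through the reported tsc_hz; the zero-period run that follows works out the same way (497 cyc, 191 nsec). Checking the first run's numbers by hand, a sketch whose integer math matches the tool's output:

    busy=2611677002 runs=389000 tsc_hz=2600000000
    cyc=$(( busy / runs ))                 # 6713 cycles per poller invocation
    ns=$(( cyc * 1000000000 / tsc_hz ))    # 2581 nsec -- matches "poller_cost: 6713 (cyc), 2581 (nsec)"
    echo "$cyc cyc, $ns nsec"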
00:05:42.719 [2024-11-26 13:18:31.103724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.656 [2024-11-26T13:18:32.226Z] ====================================== 00:05:43.656 [2024-11-26T13:18:32.226Z] busy:2602850898 (cyc) 00:05:43.656 [2024-11-26T13:18:32.226Z] total_run_count: 5237000 00:05:43.656 [2024-11-26T13:18:32.226Z] tsc_hz: 2600000000 (cyc) 00:05:43.656 [2024-11-26T13:18:32.226Z] ====================================== 00:05:43.656 [2024-11-26T13:18:32.226Z] poller_cost: 497 (cyc), 191 (nsec) 00:05:43.915 ************************************ 00:05:43.915 00:05:43.915 real 0m1.393s 00:05:43.915 user 0m1.214s 00:05:43.915 sys 0m0.073s 00:05:43.915 13:18:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.915 13:18:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.915 END TEST thread_poller_perf 00:05:43.915 ************************************ 00:05:43.915 13:18:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:43.915 00:05:43.915 real 0m3.032s 00:05:43.915 user 0m2.545s 00:05:43.915 sys 0m0.268s 00:05:43.915 ************************************ 00:05:43.915 END TEST thread 00:05:43.915 ************************************ 00:05:43.915 13:18:32 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.915 13:18:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.915 13:18:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:43.915 13:18:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:43.915 13:18:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.915 13:18:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.915 13:18:32 -- common/autotest_common.sh@10 -- # set +x 00:05:43.915 ************************************ 00:05:43.915 START TEST app_cmdline 00:05:43.915 ************************************ 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:43.915 * Looking for test storage... 
00:05:43.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.915 13:18:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.915 --rc genhtml_branch_coverage=1 00:05:43.915 --rc genhtml_function_coverage=1 00:05:43.915 --rc genhtml_legend=1 00:05:43.915 --rc geninfo_all_blocks=1 00:05:43.915 --rc geninfo_unexecuted_blocks=1 00:05:43.915 00:05:43.915 ' 00:05:43.915 13:18:32 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.915 --rc genhtml_branch_coverage=1 00:05:43.915 --rc genhtml_function_coverage=1 00:05:43.915 --rc genhtml_legend=1 00:05:43.915 --rc geninfo_all_blocks=1 00:05:43.915 --rc geninfo_unexecuted_blocks=1 00:05:43.916 
00:05:43.916 ' 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.916 --rc genhtml_branch_coverage=1 00:05:43.916 --rc genhtml_function_coverage=1 00:05:43.916 --rc genhtml_legend=1 00:05:43.916 --rc geninfo_all_blocks=1 00:05:43.916 --rc geninfo_unexecuted_blocks=1 00:05:43.916 00:05:43.916 ' 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.916 --rc genhtml_branch_coverage=1 00:05:43.916 --rc genhtml_function_coverage=1 00:05:43.916 --rc genhtml_legend=1 00:05:43.916 --rc geninfo_all_blocks=1 00:05:43.916 --rc geninfo_unexecuted_blocks=1 00:05:43.916 00:05:43.916 ' 00:05:43.916 13:18:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:43.916 13:18:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59602 00:05:43.916 13:18:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59602 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59602 ']' 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.916 13:18:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.916 13:18:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.174 [2024-11-26 13:18:32.506671] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:44.174 [2024-11-26 13:18:32.506917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59602 ] 00:05:44.174 [2024-11-26 13:18:32.661741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.432 [2024-11-26 13:18:32.745801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.998 13:18:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.998 13:18:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:44.998 13:18:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:44.998 { 00:05:44.998 "version": "SPDK v25.01-pre git sha1 a9e1e4309", 00:05:44.998 "fields": { 00:05:44.998 "major": 25, 00:05:44.998 "minor": 1, 00:05:44.998 "patch": 0, 00:05:44.998 "suffix": "-pre", 00:05:44.998 "commit": "a9e1e4309" 00:05:44.998 } 00:05:44.998 } 00:05:44.998 13:18:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.998 13:18:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.999 13:18:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:44.999 13:18:33 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.257 request: 00:05:45.257 { 00:05:45.257 "method": "env_dpdk_get_mem_stats", 00:05:45.257 "req_id": 1 00:05:45.257 } 00:05:45.257 Got JSON-RPC error response 00:05:45.257 response: 00:05:45.257 { 00:05:45.257 "code": -32601, 00:05:45.257 "message": "Method not found" 00:05:45.257 } 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.257 13:18:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59602 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59602 ']' 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59602 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59602 00:05:45.257 killing process with pid 59602 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59602' 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 59602 00:05:45.257 13:18:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 59602 00:05:46.633 00:05:46.633 real 0m2.673s 00:05:46.633 user 0m2.966s 00:05:46.633 sys 0m0.418s 00:05:46.633 ************************************ 00:05:46.633 END TEST app_cmdline 00:05:46.633 ************************************ 00:05:46.633 13:18:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.633 13:18:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.633 13:18:35 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:46.633 13:18:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.633 13:18:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.633 13:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.633 ************************************ 00:05:46.633 START TEST version 00:05:46.633 ************************************ 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:46.633 * Looking for test storage... 
00:05:46.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.633 13:18:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.633 13:18:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.633 13:18:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.633 13:18:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.633 13:18:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.633 13:18:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.633 13:18:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.633 13:18:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.633 13:18:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.633 13:18:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.633 13:18:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.633 13:18:35 version -- scripts/common.sh@344 -- # case "$op" in 00:05:46.633 13:18:35 version -- scripts/common.sh@345 -- # : 1 00:05:46.633 13:18:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.633 13:18:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.633 13:18:35 version -- scripts/common.sh@365 -- # decimal 1 00:05:46.633 13:18:35 version -- scripts/common.sh@353 -- # local d=1 00:05:46.633 13:18:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.633 13:18:35 version -- scripts/common.sh@355 -- # echo 1 00:05:46.633 13:18:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.633 13:18:35 version -- scripts/common.sh@366 -- # decimal 2 00:05:46.633 13:18:35 version -- scripts/common.sh@353 -- # local d=2 00:05:46.633 13:18:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.633 13:18:35 version -- scripts/common.sh@355 -- # echo 2 00:05:46.633 13:18:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.633 13:18:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.633 13:18:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.633 13:18:35 version -- scripts/common.sh@368 -- # return 0 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.633 --rc genhtml_branch_coverage=1 00:05:46.633 --rc genhtml_function_coverage=1 00:05:46.633 --rc genhtml_legend=1 00:05:46.633 --rc geninfo_all_blocks=1 00:05:46.633 --rc geninfo_unexecuted_blocks=1 00:05:46.633 00:05:46.633 ' 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.633 --rc genhtml_branch_coverage=1 00:05:46.633 --rc genhtml_function_coverage=1 00:05:46.633 --rc genhtml_legend=1 00:05:46.633 --rc geninfo_all_blocks=1 00:05:46.633 --rc geninfo_unexecuted_blocks=1 00:05:46.633 00:05:46.633 ' 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.633 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:46.633 --rc genhtml_branch_coverage=1 00:05:46.633 --rc genhtml_function_coverage=1 00:05:46.633 --rc genhtml_legend=1 00:05:46.633 --rc geninfo_all_blocks=1 00:05:46.633 --rc geninfo_unexecuted_blocks=1 00:05:46.633 00:05:46.633 ' 00:05:46.633 13:18:35 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.633 --rc genhtml_branch_coverage=1 00:05:46.633 --rc genhtml_function_coverage=1 00:05:46.633 --rc genhtml_legend=1 00:05:46.633 --rc geninfo_all_blocks=1 00:05:46.633 --rc geninfo_unexecuted_blocks=1 00:05:46.633 00:05:46.633 ' 00:05:46.633 13:18:35 version -- app/version.sh@17 -- # get_header_version major 00:05:46.633 13:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:46.633 13:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.633 13:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:46.633 13:18:35 version -- app/version.sh@17 -- # major=25 00:05:46.633 13:18:35 version -- app/version.sh@18 -- # get_header_version minor 00:05:46.633 13:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:46.633 13:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.633 13:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:46.634 13:18:35 version -- app/version.sh@18 -- # minor=1 00:05:46.634 13:18:35 version -- app/version.sh@19 -- # get_header_version patch 00:05:46.634 13:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:46.634 13:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:46.634 13:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.634 13:18:35 version -- app/version.sh@19 -- # patch=0 00:05:46.634 13:18:35 version -- app/version.sh@20 -- # get_header_version suffix 00:05:46.634 13:18:35 version -- app/version.sh@14 -- # cut -f2 00:05:46.634 13:18:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:46.634 13:18:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.634 13:18:35 version -- app/version.sh@20 -- # suffix=-pre 00:05:46.634 13:18:35 version -- app/version.sh@22 -- # version=25.1 00:05:46.634 13:18:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:46.634 13:18:35 version -- app/version.sh@28 -- # version=25.1rc0 00:05:46.634 13:18:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:46.634 13:18:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:46.634 13:18:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:46.634 13:18:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:46.634 ************************************ 00:05:46.634 END TEST version 00:05:46.634 ************************************ 00:05:46.634 00:05:46.634 real 0m0.170s 00:05:46.634 user 0m0.119s 00:05:46.634 sys 0m0.078s 00:05:46.634 13:18:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.634 13:18:35 version -- common/autotest_common.sh@10 -- # set +x 00:05:46.892 13:18:35 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:46.892 13:18:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:46.892 13:18:35 -- spdk/autotest.sh@194 -- # uname -s 00:05:46.892 13:18:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:46.892 13:18:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.892 13:18:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.892 13:18:35 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:46.892 13:18:35 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:46.892 13:18:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:46.892 13:18:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.892 13:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.892 ************************************ 00:05:46.892 START TEST blockdev_nvme 00:05:46.892 ************************************ 00:05:46.892 13:18:35 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:46.892 * Looking for test storage... 00:05:46.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:46.892 13:18:35 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.892 13:18:35 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.892 13:18:35 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.892 13:18:35 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:46.892 13:18:35 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.893 13:18:35 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.893 --rc genhtml_branch_coverage=1 00:05:46.893 --rc genhtml_function_coverage=1 00:05:46.893 --rc genhtml_legend=1 00:05:46.893 --rc geninfo_all_blocks=1 00:05:46.893 --rc geninfo_unexecuted_blocks=1 00:05:46.893 00:05:46.893 ' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.893 --rc genhtml_branch_coverage=1 00:05:46.893 --rc genhtml_function_coverage=1 00:05:46.893 --rc genhtml_legend=1 00:05:46.893 --rc geninfo_all_blocks=1 00:05:46.893 --rc geninfo_unexecuted_blocks=1 00:05:46.893 00:05:46.893 ' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.893 --rc genhtml_branch_coverage=1 00:05:46.893 --rc genhtml_function_coverage=1 00:05:46.893 --rc genhtml_legend=1 00:05:46.893 --rc geninfo_all_blocks=1 00:05:46.893 --rc geninfo_unexecuted_blocks=1 00:05:46.893 00:05:46.893 ' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.893 --rc genhtml_branch_coverage=1 00:05:46.893 --rc genhtml_function_coverage=1 00:05:46.893 --rc genhtml_legend=1 00:05:46.893 --rc geninfo_all_blocks=1 00:05:46.893 --rc geninfo_unexecuted_blocks=1 00:05:46.893 00:05:46.893 ' 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:46.893 13:18:35 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59774 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59774 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59774 ']' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.893 13:18:35 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.893 13:18:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:46.893 [2024-11-26 13:18:35.442210] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:05:46.893 [2024-11-26 13:18:35.442487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59774 ] 00:05:47.151 [2024-11-26 13:18:35.596757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.151 [2024-11-26 13:18:35.677613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.717 13:18:36 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.717 13:18:36 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.717 13:18:36 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:47.717 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.717 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:48.285 13:18:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:48.285 13:18:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:48.286 13:18:36 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "318b8d1b-3d8c-49f0-946e-885f1aefbe3c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "318b8d1b-3d8c-49f0-946e-885f1aefbe3c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a2ca8c0b-3b18-47b2-a98c-fa5b150abed7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a2ca8c0b-3b18-47b2-a98c-fa5b150abed7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ace6047d-387c-4c2c-b93b-243108bd9bc7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ace6047d-387c-4c2c-b93b-243108bd9bc7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f09ca9be-2de7-41ac-91f6-796fc5a8cb7b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f09ca9be-2de7-41ac-91f6-796fc5a8cb7b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "52ceffb6-2c28-482a-a1b2-4e37639b1c38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "52ceffb6-2c28-482a-a1b2-4e37639b1c38",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cbc7642e-33ae-4ce6-b5fa-90648d07a747"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cbc7642e-33ae-4ce6-b5fa-90648d07a747",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:48.286 13:18:36 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:48.286 13:18:36 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:48.286 13:18:36 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:48.286 13:18:36 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59774 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59774 ']' 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59774 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:48.286 13:18:36 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59774 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.286 killing process with pid 59774 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59774' 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59774 00:05:48.286 13:18:36 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59774 00:05:49.660 13:18:38 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:49.660 13:18:38 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:49.660 13:18:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:49.660 13:18:38 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.660 13:18:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.919 ************************************ 00:05:49.919 START TEST bdev_hello_world 00:05:49.919 ************************************ 00:05:49.919 13:18:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:49.919 [2024-11-26 13:18:38.283198] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:49.919 [2024-11-26 13:18:38.283311] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:05:49.919 [2024-11-26 13:18:38.443468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.177 [2024-11-26 13:18:38.540559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.745 [2024-11-26 13:18:39.073556] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:50.745 [2024-11-26 13:18:39.073595] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:50.745 [2024-11-26 13:18:39.073612] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:50.745 [2024-11-26 13:18:39.076064] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:50.745 [2024-11-26 13:18:39.076528] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:50.745 [2024-11-26 13:18:39.076552] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:50.745 [2024-11-26 13:18:39.076769] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
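That round-trip, open Nvme0n1, write, read the string back, is the whole of the hello_bdev example. It can be rerun by hand with the same generated JSON config the harness passed in; a minimal sketch assuming the in-repo paths:

    # Rerun the example against the first NVMe bdev, reusing the
    # bdev_nvme_attach_controller config written by gen_nvme.sh.
    ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json -b Nvme0n1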
00:05:50.745 00:05:50.745 [2024-11-26 13:18:39.076791] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:51.312 00:05:51.312 real 0m1.530s 00:05:51.312 user 0m1.256s 00:05:51.312 sys 0m0.168s 00:05:51.312 13:18:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.312 ************************************ 00:05:51.312 END TEST bdev_hello_world 00:05:51.312 ************************************ 00:05:51.312 13:18:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:51.312 13:18:39 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:51.312 13:18:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.312 13:18:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.312 13:18:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.312 ************************************ 00:05:51.312 START TEST bdev_bounds 00:05:51.312 ************************************ 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59895 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59895' 00:05:51.312 Process bdevio pid: 59895 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59895 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59895 ']' 00:05:51.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.312 13:18:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:51.313 [2024-11-26 13:18:39.845732] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
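bdevio is launched with core mask 0x7 here, unlike the single-core 0x1 runs above, which is why the EAL line that follows reports three cores and three reactors start. Popcount of the mask gives the core count:

    # -c 0x7 -> binary 111 -> cores 0, 1 and 2 each host one reactor.
    mask=0x7; count=0
    while (( mask > 0 )); do (( count += mask & 1 )); (( mask >>= 1 )); done
    echo "$count"   # 3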
00:05:51.313 [2024-11-26 13:18:39.845818] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59895 ] 00:05:51.571 [2024-11-26 13:18:39.995776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.571 [2024-11-26 13:18:40.084260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.571 [2024-11-26 13:18:40.084623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.571 [2024-11-26 13:18:40.084637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.506 13:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.506 13:18:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:52.506 13:18:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:52.506 I/O targets: 00:05:52.506 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:52.506 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:52.506 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:52.506 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:52.506 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:52.506 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:52.506 00:05:52.506 00:05:52.506 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.506 http://cunit.sourceforge.net/ 00:05:52.506 00:05:52.506 00:05:52.506 Suite: bdevio tests on: Nvme3n1 00:05:52.506 Test: blockdev write read block ...passed 00:05:52.506 Test: blockdev write zeroes read block ...passed 00:05:52.506 Test: blockdev write zeroes read no split ...passed 00:05:52.506 Test: blockdev write zeroes read split ...passed 00:05:52.506 Test: blockdev write zeroes read split partial ...passed 00:05:52.506 Test: blockdev reset ...[2024-11-26 13:18:40.855153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:52.506 [2024-11-26 13:18:40.858041] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
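The suites that follow are generated by tests.py driving the bdevio app that was started with -w (wait for RPC) above. Reproducing the run outside the harness is a two-step sketch, assuming the in-repo paths and the default /var/tmp/spdk.sock (the harness also waitforlistens on the socket between the two steps):

    # Start bdevio suspended until RPC arrives, then fire the whole
    # test matrix over the socket and reap the process.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"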
00:05:52.506 passed 00:05:52.506 Test: blockdev write read 8 blocks ...passed 00:05:52.506 Test: blockdev write read size > 128k ...passed 00:05:52.506 Test: blockdev write read invalid size ...passed 00:05:52.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.507 Test: blockdev write read max offset ...passed 00:05:52.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.507 Test: blockdev writev readv 8 blocks ...passed 00:05:52.507 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.507 Test: blockdev writev readv block ...passed 00:05:52.507 Test: blockdev writev readv size > 128k ...passed 00:05:52.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.507 Test: blockdev comparev and writev ...[2024-11-26 13:18:40.863644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3c0a000 len:0x1000 00:05:52.507 [2024-11-26 13:18:40.863759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev nvme passthru rw ...passed 00:05:52.507 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:18:40.864413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:52.507 [2024-11-26 13:18:40.864535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:05:52.507 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev copy ...passed 00:05:52.507 Suite: bdevio tests on: Nvme2n3 00:05:52.507 Test: blockdev write read block ...passed 00:05:52.507 Test: blockdev write zeroes read block ...passed 00:05:52.507 Test: blockdev write zeroes read no split ...passed 00:05:52.507 Test: blockdev write zeroes read split ...passed 00:05:52.507 Test: blockdev write zeroes read split partial ...passed 00:05:52.507 Test: blockdev reset ...[2024-11-26 13:18:40.906433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:52.507 [2024-11-26 13:18:40.909141] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:52.507 passed 00:05:52.507 Test: blockdev write read 8 blocks ...passed 00:05:52.507 Test: blockdev write read size > 128k ...passed 00:05:52.507 Test: blockdev write read invalid size ...passed 00:05:52.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.507 Test: blockdev write read max offset ...passed 00:05:52.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.507 Test: blockdev writev readv 8 blocks ...passed 00:05:52.507 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.507 Test: blockdev writev readv block ...passed 00:05:52.507 Test: blockdev writev readv size > 128k ...passed 00:05:52.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.507 Test: blockdev comparev and writev ...[2024-11-26 13:18:40.914312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28fc06000 len:0x1000 00:05:52.507 [2024-11-26 13:18:40.914416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev nvme passthru rw ...passed 00:05:52.507 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:18:40.915058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:52.507 [2024-11-26 13:18:40.915138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:05:52.507 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev copy ...passed 00:05:52.507 Suite: bdevio tests on: Nvme2n2 00:05:52.507 Test: blockdev write read block ...passed 00:05:52.507 Test: blockdev write zeroes read block ...passed 00:05:52.507 Test: blockdev write zeroes read no split ...passed 00:05:52.507 Test: blockdev write zeroes read split ...passed 00:05:52.507 Test: blockdev write zeroes read split partial ...passed 00:05:52.507 Test: blockdev reset ...[2024-11-26 13:18:40.955293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:52.507 [2024-11-26 13:18:40.957913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
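The nvme passthru vendor specific cases send an admin opcode the QEMU controller does not implement and treat the INVALID OPCODE (00/01) completion as a pass: SCT 0h is the generic status type and SC 01h is Invalid Command Opcode. The same negative check can be made with nvme-cli; the device path here is hypothetical:

    # Expect the controller to reject a vendor-specific admin opcode
    # with Invalid Command Opcode (SCT 0h / SC 01h).
    nvme admin-passthru /dev/nvme0 --opcode=0xc0; echo "rc=$?"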
00:05:52.507 passed 00:05:52.507 Test: blockdev write read 8 blocks ...passed 00:05:52.507 Test: blockdev write read size > 128k ...passed 00:05:52.507 Test: blockdev write read invalid size ...passed 00:05:52.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.507 Test: blockdev write read max offset ...passed 00:05:52.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.507 Test: blockdev writev readv 8 blocks ...passed 00:05:52.507 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.507 Test: blockdev writev readv block ...passed 00:05:52.507 Test: blockdev writev readv size > 128k ...passed 00:05:52.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.507 Test: blockdev comparev and writev ...[2024-11-26 13:18:40.963409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c3c000 len:0x1000 00:05:52.507 [2024-11-26 13:18:40.963456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev nvme passthru rw ...passed 00:05:52.507 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.507 Test: blockdev nvme admin passthru ...[2024-11-26 13:18:40.963979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:52.507 [2024-11-26 13:18:40.964002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:52.507 passed 00:05:52.507 Test: blockdev copy ...passed 00:05:52.507 Suite: bdevio tests on: Nvme2n1 00:05:52.507 Test: blockdev write read block ...passed 00:05:52.507 Test: blockdev write zeroes read block ...passed 00:05:52.507 Test: blockdev write zeroes read no split ...passed 00:05:52.507 Test: blockdev write zeroes read split ...passed 00:05:52.507 Test: blockdev write zeroes read split partial ...passed 00:05:52.507 Test: blockdev reset ...[2024-11-26 13:18:41.001634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:52.507 [2024-11-26 13:18:41.004121] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
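Each comparev and writev case deliberately provokes the COMPARE FAILURE (02/85) completions seen above: SCT 2h is the media and data integrity error type and SC 85h is Compare Failure. A hand-rolled equivalent with nvme-cli, assuming a hypothetical scratch namespace path:

    dev=/dev/nvme0n1                       # hypothetical scratch namespace
    printf 'A%.0s' {1..4096} > /tmp/pat_a  # 4 KiB of 'A'
    printf 'B%.0s' {1..4096} > /tmp/pat_b  # 4 KiB of 'B'
    nvme write   "$dev" -s 0 -c 0 -z 4096 -d /tmp/pat_a
    nvme compare "$dev" -s 0 -c 0 -z 4096 -d /tmp/pat_b   # fails: 02/85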
00:05:52.507 passed 00:05:52.507 Test: blockdev write read 8 blocks ...passed 00:05:52.507 Test: blockdev write read size > 128k ...passed 00:05:52.507 Test: blockdev write read invalid size ...passed 00:05:52.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.507 Test: blockdev write read max offset ...passed 00:05:52.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.507 Test: blockdev writev readv 8 blocks ...passed 00:05:52.507 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.507 Test: blockdev writev readv block ...passed 00:05:52.507 Test: blockdev writev readv size > 128k ...passed 00:05:52.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.507 Test: blockdev comparev and writev ...[2024-11-26 13:18:41.009683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c38000 len:0x1000 00:05:52.508 [2024-11-26 13:18:41.009723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:52.508 passed 00:05:52.508 Test: blockdev nvme passthru rw ...passed 00:05:52.508 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.508 Test: blockdev nvme admin passthru ...[2024-11-26 13:18:41.010226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:52.508 [2024-11-26 13:18:41.010248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:52.508 passed 00:05:52.508 Test: blockdev copy ...passed 00:05:52.508 Suite: bdevio tests on: Nvme1n1 00:05:52.508 Test: blockdev write read block ...passed 00:05:52.508 Test: blockdev write zeroes read block ...passed 00:05:52.508 Test: blockdev write zeroes read no split ...passed 00:05:52.508 Test: blockdev write zeroes read split ...passed 00:05:52.508 Test: blockdev write zeroes read split partial ...passed 00:05:52.508 Test: blockdev reset ...[2024-11-26 13:18:41.049525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:52.508 [2024-11-26 13:18:41.051923] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:05:52.508 passed 00:05:52.508 Test: blockdev write read 8 blocks ...passed 00:05:52.508 Test: blockdev write read size > 128k ...passed 00:05:52.508 Test: blockdev write read invalid size ...passed 00:05:52.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.508 Test: blockdev write read max offset ...passed 00:05:52.508 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.508 Test: blockdev writev readv 8 blocks ...passed 00:05:52.508 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.508 Test: blockdev writev readv block ...passed 00:05:52.508 Test: blockdev writev readv size > 128k ...passed 00:05:52.508 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.508 Test: blockdev comparev and writev ...[2024-11-26 13:18:41.057190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0c34000 len:0x1000 00:05:52.508 [2024-11-26 13:18:41.057230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:52.508 passed 00:05:52.508 Test: blockdev nvme passthru rw ...passed 00:05:52.508 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.508 Test: blockdev nvme admin passthru ...[2024-11-26 13:18:41.057791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:52.508 [2024-11-26 13:18:41.057817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:52.508 passed 00:05:52.508 Test: blockdev copy ...passed 00:05:52.508 Suite: bdevio tests on: Nvme0n1 00:05:52.508 Test: blockdev write read block ...passed 00:05:52.508 Test: blockdev write zeroes read block ...passed 00:05:52.508 Test: blockdev write zeroes read no split ...passed 00:05:52.768 Test: blockdev write zeroes read split ...passed 00:05:52.768 Test: blockdev write zeroes read split partial ...passed 00:05:52.768 Test: blockdev reset ...[2024-11-26 13:18:41.097240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:52.769 [2024-11-26 13:18:41.099652] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
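Nvme0n1 was enumerated at the start of the run with "md_size": 64 and "md_interleave": false, so its suite below skips comparev_and_writev: bdevio does not yet handle bdevs with separate metadata. Spotting such bdevs up front is one jq filter, assuming the default socket:

    # bdevs whose namespaces carry separate (non-interleaved) metadata
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'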
00:05:52.769 passed 00:05:52.769 Test: blockdev write read 8 blocks ...passed 00:05:52.769 Test: blockdev write read size > 128k ...passed 00:05:52.769 Test: blockdev write read invalid size ...passed 00:05:52.769 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:52.769 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:52.769 Test: blockdev write read max offset ...passed 00:05:52.769 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:52.769 Test: blockdev writev readv 8 blocks ...passed 00:05:52.769 Test: blockdev writev readv 30 x 1block ...passed 00:05:52.769 Test: blockdev writev readv block ...passed 00:05:52.769 Test: blockdev writev readv size > 128k ...passed 00:05:52.769 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:52.769 Test: blockdev comparev and writev ...passed 00:05:52.769 Test: blockdev nvme passthru rw ...[2024-11-26 13:18:41.104798] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:52.769 separate metadata which is not supported yet. 00:05:52.769 passed 00:05:52.769 Test: blockdev nvme passthru vendor specific ...passed 00:05:52.769 Test: blockdev nvme admin passthru ...[2024-11-26 13:18:41.105197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:52.769 [2024-11-26 13:18:41.105230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:52.769 passed 00:05:52.769 Test: blockdev copy ...passed 00:05:52.769 00:05:52.769 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.769 suites 6 6 n/a 0 0 00:05:52.769 tests 138 138 138 0 0 00:05:52.769 asserts 893 893 893 0 n/a 00:05:52.769 00:05:52.769 Elapsed time = 0.776 seconds 00:05:52.769 0 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59895 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59895 ']' 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59895 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59895 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.769 killing process with pid 59895 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59895' 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59895 00:05:52.769 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59895 00:05:53.336 13:18:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:53.336 00:05:53.336 real 0m1.868s 00:05:53.336 user 0m4.876s 00:05:53.336 sys 0m0.256s 00:05:53.336 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.336 13:18:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:53.336 ************************************ 00:05:53.336 END 
TEST bdev_bounds 00:05:53.336 ************************************ 00:05:53.336 13:18:41 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:53.336 13:18:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:53.336 13:18:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.336 13:18:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.336 ************************************ 00:05:53.336 START TEST bdev_nbd 00:05:53.336 ************************************ 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:53.336 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59944 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59944 /var/tmp/spdk-nbd.sock 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59944 ']' 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:53.337 
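bdev_nbd switches to a dedicated bdev_svc app on its own socket, /var/tmp/spdk-nbd.sock, and for each bdev exports a kernel /dev/nbdX node, waits for it to appear in /proc/partitions, and reads one direct-I/O block through it, exactly the waitfornbd and dd steps traced below. A condensed one-device sketch, assuming the nbd kernel module is loaded and the in-repo rpc.py path:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export the bdev as a kernel block device.
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0

    # waitfornbd: ready once the device shows up in /proc/partitions.
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done

    # One 4 KiB direct read, as the harness does, then tear down.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    $rpc nbd_stop_disk /dev/nbd0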
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.337 13:18:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:53.337 [2024-11-26 13:18:41.769070] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:05:53.337 [2024-11-26 13:18:41.769193] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.595 [2024-11-26 13:18:41.926478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.595 [2024-11-26 13:18:42.007394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.161 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.420 13:18:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.420 1+0 records in 00:05:54.420 1+0 records out 00:05:54.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246897 s, 16.6 MB/s 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.420 1+0 records in 00:05:54.420 1+0 records out 00:05:54.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024844 s, 16.5 MB/s 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.420 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.679 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.679 13:18:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.679 13:18:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.679 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.679 13:18:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.679 1+0 records in 00:05:54.679 1+0 records out 00:05:54.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362585 s, 11.3 MB/s 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.679 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:54.937 1+0 records in 00:05:54.937 1+0 records out 00:05:54.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495831 s, 8.3 MB/s 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:54.937 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.197 1+0 records in 00:05:55.197 1+0 records out 00:05:55.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545793 s, 7.5 MB/s 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.197 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:55.455 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:55.455 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:55.456 1+0 records in 00:05:55.456 1+0 records out 00:05:55.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394046 s, 10.4 MB/s 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:55.456 13:18:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.714 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd0", 00:05:55.714 "bdev_name": "Nvme0n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd1", 00:05:55.714 "bdev_name": "Nvme1n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd2", 00:05:55.714 "bdev_name": "Nvme2n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd3", 00:05:55.714 "bdev_name": "Nvme2n2" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd4", 00:05:55.714 "bdev_name": "Nvme2n3" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd5", 00:05:55.714 "bdev_name": "Nvme3n1" 00:05:55.714 } 00:05:55.714 ]' 00:05:55.714 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:55.714 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd0", 00:05:55.714 "bdev_name": "Nvme0n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd1", 00:05:55.714 "bdev_name": "Nvme1n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 
"nbd_device": "/dev/nbd2", 00:05:55.714 "bdev_name": "Nvme2n1" 00:05:55.714 }, 00:05:55.714 { 00:05:55.714 "nbd_device": "/dev/nbd3", 00:05:55.715 "bdev_name": "Nvme2n2" 00:05:55.715 }, 00:05:55.715 { 00:05:55.715 "nbd_device": "/dev/nbd4", 00:05:55.715 "bdev_name": "Nvme2n3" 00:05:55.715 }, 00:05:55.715 { 00:05:55.715 "nbd_device": "/dev/nbd5", 00:05:55.715 "bdev_name": "Nvme3n1" 00:05:55.715 } 00:05:55.715 ]' 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.715 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.973 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:56.231 13:18:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.231 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.489 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.490 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.490 13:18:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.749 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
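The start/stop churn traced above is driven by two small polling helpers. A minimal sketch, reconstructed from the xtrace output rather than copied from common/autotest_common.sh and bdev/nbd_common.sh, so the loop bodies are illustrative:

    waitfornbd() {                        # block until /dev/$1 is attached and readable
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do   # bounded poll: the kernel must list the device...
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # ...and it must serve one 4 KiB direct read before the test trusts it
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                  # a non-empty read-back means the export is live
    }

    waitfornbd_exit() {                   # block until /dev/$1 has been detached again
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }

Each nbd_start_disk RPC in the trace is followed by waitfornbd, and each nbd_stop_disk by waitfornbd_exit, which is why every device produces the same grep/dd/stat burst.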
00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.008 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:57.266 /dev/nbd0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.266 1+0 records in 00:05:57.266 1+0 records out 00:05:57.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000888418 s, 4.6 MB/s 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.266 13:18:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:57.525 /dev/nbd1 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.525 1+0 records in 00:05:57.525 1+0 records out 
00:05:57.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668968 s, 6.1 MB/s 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.525 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:57.787 /dev/nbd10 00:05:57.787 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:57.787 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:57.787 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:57.787 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.788 1+0 records in 00:05:57.788 1+0 records out 00:05:57.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492256 s, 8.3 MB/s 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:57.788 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:58.046 /dev/nbd11 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:58.046 13:18:46 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.046 1+0 records in 00:05:58.046 1+0 records out 00:05:58.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628642 s, 6.5 MB/s 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.046 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:58.304 /dev/nbd12 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.304 1+0 records in 00:05:58.304 1+0 records out 00:05:58.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292625 s, 14.0 MB/s 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.304 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.305 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.305 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.305 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:58.564 /dev/nbd13 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.564 1+0 records in 00:05:58.564 1+0 records out 00:05:58.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423213 s, 9.7 MB/s 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.564 13:18:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.822 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.822 { 00:05:58.822 "nbd_device": "/dev/nbd0", 00:05:58.822 "bdev_name": "Nvme0n1" 00:05:58.822 }, 00:05:58.822 { 00:05:58.822 "nbd_device": "/dev/nbd1", 00:05:58.822 "bdev_name": "Nvme1n1" 00:05:58.822 }, 00:05:58.822 { 00:05:58.823 "nbd_device": "/dev/nbd10", 00:05:58.823 "bdev_name": "Nvme2n1" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd11", 00:05:58.823 "bdev_name": "Nvme2n2" 00:05:58.823 }, 
00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd12", 00:05:58.823 "bdev_name": "Nvme2n3" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd13", 00:05:58.823 "bdev_name": "Nvme3n1" 00:05:58.823 } 00:05:58.823 ]' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd0", 00:05:58.823 "bdev_name": "Nvme0n1" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd1", 00:05:58.823 "bdev_name": "Nvme1n1" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd10", 00:05:58.823 "bdev_name": "Nvme2n1" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd11", 00:05:58.823 "bdev_name": "Nvme2n2" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd12", 00:05:58.823 "bdev_name": "Nvme2n3" 00:05:58.823 }, 00:05:58.823 { 00:05:58.823 "nbd_device": "/dev/nbd13", 00:05:58.823 "bdev_name": "Nvme3n1" 00:05:58.823 } 00:05:58.823 ]' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.823 /dev/nbd1 00:05:58.823 /dev/nbd10 00:05:58.823 /dev/nbd11 00:05:58.823 /dev/nbd12 00:05:58.823 /dev/nbd13' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.823 /dev/nbd1 00:05:58.823 /dev/nbd10 00:05:58.823 /dev/nbd11 00:05:58.823 /dev/nbd12 00:05:58.823 /dev/nbd13' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:58.823 256+0 records in 00:05:58.823 256+0 records out 00:05:58.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422343 s, 248 MB/s 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.823 256+0 records in 00:05:58.823 256+0 records out 00:05:58.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0549693 s, 19.1 MB/s 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.823 13:18:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.823 256+0 records in 00:05:58.823 256+0 records out 00:05:58.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0629054 s, 16.7 MB/s 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.823 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:59.082 256+0 records in 00:05:59.082 256+0 records out 00:05:59.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0646418 s, 16.2 MB/s 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:05:59.082 256+0 records in 00:05:59.082 256+0 records out 00:05:59.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0617661 s, 17.0 MB/s 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:59.082 256+0 records in 00:05:59.082 256+0 records out 00:05:59.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662415 s, 15.8 MB/s 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:59.082 256+0 records in 00:05:59.082 256+0 records out 00:05:59.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0643124 s, 16.3 MB/s 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.082 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.341 13:18:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.599 13:18:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.599 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.858 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.116 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:00.374 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:00.375 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.375 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.375 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.375 13:18:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:00.634 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:00.891 malloc_lvol_verify 00:06:00.891 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:01.148 a3d1228b-7a0f-47ae-91e0-eb1b0fc921f4 00:06:01.148 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:01.406 5ebcc99a-5dc3-44f2-b800-cb11bae645bf 00:06:01.406 13:18:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:01.664 /dev/nbd0 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
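Condensed, the lvol round-trip above is four RPC calls plus a capacity wait before the mkfs.ext4 run that follows. The subcommands and sizes are the ones visible in the trace (a 16 MB malloc bdev with 512-byte blocks; a 4 MB lvol, reported as 8192 512-byte sectors); the until-loop is a sketch of what wait_for_nbd_set_capacity checks, not the verbatim helper:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # RAM-backed bdev to host the lvstore
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID logged above
    $rpc bdev_lvol_create lvol 4 -l lvs                    # logical volume addressed as lvs/lvol
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol over NBD
    # don't touch the device until the kernel reports a non-zero size
    until [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -gt 0 ]]; do
        sleep 0.1
    done
    mkfs.ext4 /dev/nbd0                                    # prove the lvol takes a filesystem
    $rpc nbd_stop_disk /dev/nbd0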
00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:01.664 mke2fs 1.47.0 (5-Feb-2023) 00:06:01.664 Discarding device blocks: 0/4096 done 00:06:01.664 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:01.664 00:06:01.664 Allocating group tables: 0/1 done 00:06:01.664 Writing inode tables: 0/1 done 00:06:01.664 Creating journal (1024 blocks): done 00:06:01.664 Writing superblocks and filesystem accounting information: 0/1 done 00:06:01.664 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.664 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59944 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59944 ']' 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59944 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59944 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.921 killing process with pid 59944 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59944' 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59944 00:06:01.921 13:18:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59944 00:06:02.858 13:18:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:02.858 00:06:02.858 real 0m9.382s 00:06:02.858 user 0m13.510s 00:06:02.858 sys 0m2.968s 00:06:02.858 13:18:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.858 13:18:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 
-- # set +x 00:06:02.858 ************************************ 00:06:02.858 END TEST bdev_nbd 00:06:02.858 ************************************ 00:06:02.858 13:18:51 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:02.858 13:18:51 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:02.858 skipping fio tests on NVMe due to multi-ns failures. 00:06:02.858 13:18:51 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:02.858 13:18:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:02.858 13:18:51 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:02.858 13:18:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:02.858 13:18:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.858 13:18:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.858 ************************************ 00:06:02.858 START TEST bdev_verify 00:06:02.858 ************************************ 00:06:02.858 13:18:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:02.858 [2024-11-26 13:18:51.192351] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:02.858 [2024-11-26 13:18:51.192478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60319 ] 00:06:02.858 [2024-11-26 13:18:51.352758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.116 [2024-11-26 13:18:51.450531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.116 [2024-11-26 13:18:51.450716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.680 Running I/O for 5 seconds... 
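Stripped of the run_test plumbing, the five-second verify pass launched above is a single bdevperf invocation. The flags are copied from the logged command line; the gloss on each is the editor's reading of bdevperf usage, not taken from the log:

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config generated earlier in the run
        -q 128       # 128 outstanding I/Os per job
        -o 4096      # 4 KiB I/O size
        -w verify    # write a pattern, read it back, compare
        -t 5         # matches 'Running I/O for 5 seconds...' above
        -C           # every core drives every bdev
        -m 0x3       # two reactors, cores 0 and 1
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"

The -C/-m 0x3 pair is why the results table below lists each Nvme bdev twice, once per core mask (0x1 and 0x2).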
00:06:05.981 21696.00 IOPS, 84.75 MiB/s [2024-11-26T13:18:55.481Z] 21632.00 IOPS, 84.50 MiB/s [2024-11-26T13:18:56.413Z] 22293.33 IOPS, 87.08 MiB/s [2024-11-26T13:18:57.347Z] 22176.00 IOPS, 86.62 MiB/s [2024-11-26T13:18:57.347Z] 22195.20 IOPS, 86.70 MiB/s
00:06:08.778 Latency(us)
00:06:08.778 [2024-11-26T13:18:57.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:08.778 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0xbd0bd
00:06:08.778 Nvme0n1 : 5.06 1844.85 7.21 0.00 0.00 69207.07 10132.87 74206.92
00:06:08.778 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:08.778 Nvme0n1 : 5.07 1819.22 7.11 0.00 0.00 69664.86 14821.22 64527.75
00:06:08.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0xa0000
00:06:08.778 Nvme1n1 : 5.07 1844.17 7.20 0.00 0.00 69076.32 12804.73 64931.05
00:06:08.778 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0xa0000 length 0xa0000
00:06:08.778 Nvme1n1 : 5.07 1817.65 7.10 0.00 0.00 69563.65 7662.67 68157.44
00:06:08.778 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0x80000
00:06:08.778 Nvme2n1 : 5.07 1843.39 7.20 0.00 0.00 68932.41 13913.80 63317.86
00:06:08.778 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x80000 length 0x80000
00:06:08.778 Nvme2n1 : 5.06 1821.47 7.12 0.00 0.00 70091.04 12804.73 69367.34
00:06:08.778 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0x80000
00:06:08.778 Nvme2n2 : 5.07 1842.97 7.20 0.00 0.00 68788.92 13409.67 61704.66
00:06:08.778 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x80000 length 0x80000
00:06:08.778 Nvme2n2 : 5.06 1820.82 7.11 0.00 0.00 70005.71 15022.87 65334.35
00:06:08.778 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0x80000
00:06:08.778 Nvme2n3 : 5.07 1841.91 7.19 0.00 0.00 68659.10 12048.54 62914.56
00:06:08.778 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x80000 length 0x80000
00:06:08.778 Nvme2n3 : 5.06 1820.33 7.11 0.00 0.00 69891.57 16736.89 62107.96
00:06:08.778 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x0 length 0x20000
00:06:08.778 Nvme3n1 : 5.08 1852.08 7.23 0.00 0.00 68192.05 3012.14 66544.25
00:06:08.778 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:08.778 Verification LBA range: start 0x20000 length 0x20000
00:06:08.778 Nvme3n1 : 5.06 1819.83 7.11 0.00 0.00 69767.65 16636.06 61704.66
00:06:08.778 [2024-11-26T13:18:57.348Z] ===================================================================================================================
00:06:08.778 [2024-11-26T13:18:57.348Z] Total : 21988.69 85.89 0.00 0.00 69315.86 3012.14 74206.92
00:06:12.963
00:06:12.963 real 0m10.344s
00:06:12.963 user 0m19.745s
00:06:12.963 sys 0m0.242s
00:06:12.963 13:19:01 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.963 ************************************ 00:06:12.963 END TEST bdev_verify 00:06:12.963 ************************************ 00:06:12.963 13:19:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 13:19:01 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:12.963 13:19:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:12.963 13:19:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.963 13:19:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.963 ************************************ 00:06:12.963 START TEST bdev_verify_big_io 00:06:12.963 ************************************ 00:06:12.963 13:19:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:13.221 [2024-11-26 13:19:01.573945] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:13.221 [2024-11-26 13:19:01.574061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:13.221 [2024-11-26 13:19:01.732507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.479 [2024-11-26 13:19:01.834231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.479 [2024-11-26 13:19:01.834334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.045 Running I/O for 5 seconds... 
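bdev_verify_big_io is the same invocation with only the I/O size raised, which is why the table below shows roughly a tenth of the IOPS of the 4 KiB run at a comparable MiB/s:

    # identical to the bdev_verify command above except for -o (64 KiB I/Os)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3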
00:06:17.847 272.00 IOPS, 17.00 MiB/s [2024-11-26T13:19:08.318Z] 1407.50 IOPS, 87.97 MiB/s [2024-11-26T13:19:08.578Z] 2435.00 IOPS, 152.19 MiB/s
00:06:20.008 Latency(us)
00:06:20.008 [2024-11-26T13:19:08.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:20.008 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0xbd0b
00:06:20.008 Nvme0n1 : 5.59 138.51 8.66 0.00 0.00 869429.51 9225.45 1200216.22
00:06:20.008 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:20.008 Nvme0n1 : 5.73 139.68 8.73 0.00 0.00 879567.54 19761.62 1187310.67
00:06:20.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0xa000
00:06:20.008 Nvme1n1 : 5.84 135.93 8.50 0.00 0.00 859926.54 90338.86 1303460.63
00:06:20.008 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0xa000 length 0xa000
00:06:20.008 Nvme1n1 : 5.65 140.03 8.75 0.00 0.00 838229.59 79449.80 987274.63
00:06:20.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0x8000
00:06:20.008 Nvme2n1 : 5.94 138.15 8.63 0.00 0.00 818678.19 100824.62 1322818.95
00:06:20.008 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x8000 length 0x8000
00:06:20.008 Nvme2n1 : 5.85 149.68 9.36 0.00 0.00 766966.63 46177.67 784012.21
00:06:20.008 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0x8000
00:06:20.008 Nvme2n2 : 5.98 146.95 9.18 0.00 0.00 757695.95 36095.21 1348630.06
00:06:20.008 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x8000 length 0x8000
00:06:20.008 Nvme2n2 : 5.85 153.23 9.58 0.00 0.00 726441.35 70173.93 696899.74
00:06:20.008 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0x8000
00:06:20.008 Nvme2n3 : 6.04 153.26 9.58 0.00 0.00 701481.21 37305.11 1367988.38
00:06:20.008 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x8000 length 0x8000
00:06:20.008 Nvme2n3 : 6.02 166.77 10.42 0.00 0.00 647363.87 18652.55 819502.47
00:06:20.008 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x0 length 0x2000
00:06:20.008 Nvme3n1 : 6.06 176.27 11.02 0.00 0.00 589186.23 611.25 1400252.26
00:06:20.008 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:20.008 Verification LBA range: start 0x2000 length 0x2000
00:06:20.008 Nvme3n1 : 6.03 180.08 11.25 0.00 0.00 581497.49 976.74 832408.02
00:06:20.008 [2024-11-26T13:19:08.578Z] ===================================================================================================================
00:06:20.008 [2024-11-26T13:19:08.578Z] Total : 1818.52 113.66 0.00 0.00 741355.26 611.25 1400252.26
00:06:22.548
00:06:22.548 real 0m9.208s
00:06:22.548 user 0m17.501s
00:06:22.548 sys 0m0.226s
00:06:22.548 13:19:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:22.548 13:19:10 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:22.548 ************************************ 00:06:22.548 END TEST bdev_verify_big_io 00:06:22.548 ************************************ 00:06:22.548 13:19:10 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.548 13:19:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:22.548 13:19:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.548 13:19:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.548 ************************************ 00:06:22.548 START TEST bdev_write_zeroes 00:06:22.548 ************************************ 00:06:22.548 13:19:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:22.548 [2024-11-26 13:19:10.837909] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:22.548 [2024-11-26 13:19:10.838266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:06:22.548 [2024-11-26 13:19:10.992489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.548 [2024-11-26 13:19:11.099993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.114 Running I/O for 1 seconds... 
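The write_zeroes pass swaps the workload, shortens the run to one second, and drops -C/-m, so bdevperf runs on a single default core and each bdev gets exactly one Core Mask 0x1 job in the table below:

    # flags copied from the logged run_test command line
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1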
00:06:24.493 79488.00 IOPS, 310.50 MiB/s
00:06:24.493 Latency(us)
00:06:24.493 [2024-11-26T13:19:13.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:24.493 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme0n1 : 1.02 13170.79 51.45 0.00 0.00 9699.29 8368.44 19055.85
00:06:24.493 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme1n1 : 1.02 13155.74 51.39 0.00 0.00 9698.47 8418.86 18753.38
00:06:24.493 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme2n1 : 1.02 13140.73 51.33 0.00 0.00 9684.25 8368.44 18350.08
00:06:24.493 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme2n2 : 1.02 13125.86 51.27 0.00 0.00 9681.39 8368.44 17845.96
00:06:24.493 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme2n3 : 1.03 13111.07 51.22 0.00 0.00 9675.67 7612.26 17543.48
00:06:24.493 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.493 Nvme3n1 : 1.03 13095.94 51.16 0.00 0.00 9651.53 5419.32 19055.85
00:06:24.493 [2024-11-26T13:19:13.063Z] ===================================================================================================================
00:06:24.493 [2024-11-26T13:19:13.063Z] Total : 78800.14 307.81 0.00 0.00 9681.77 5419.32 19055.85
00:06:25.057
00:06:25.057 real 0m2.670s
00:06:25.057 user 0m2.361s
00:06:25.057 sys 0m0.194s
00:06:25.057 13:19:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.057 13:19:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:25.057 ************************************
00:06:25.057 END TEST bdev_write_zeroes
00:06:25.057 ************************************
00:06:25.057 13:19:13 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:25.057 13:19:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:25.057 13:19:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.057 13:19:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:25.057 ************************************
00:06:25.057 START TEST bdev_json_nonenclosed
00:06:25.057 ************************************
00:06:25.057 13:19:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:25.057 [2024-11-26 13:19:13.555724] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:06:25.057 [2024-11-26 13:19:13.555838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60586 ] 00:06:25.315 [2024-11-26 13:19:13.715138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.315 [2024-11-26 13:19:13.812054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.315 [2024-11-26 13:19:13.812127] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:25.315 [2024-11-26 13:19:13.812143] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:25.315 [2024-11-26 13:19:13.812152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.573 00:06:25.573 real 0m0.493s 00:06:25.573 user 0m0.294s 00:06:25.573 sys 0m0.095s 00:06:25.573 13:19:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.573 13:19:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:25.573 ************************************ 00:06:25.573 END TEST bdev_json_nonenclosed 00:06:25.573 ************************************ 00:06:25.573 13:19:14 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:25.573 13:19:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:25.573 13:19:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.573 13:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.573 ************************************ 00:06:25.573 START TEST bdev_json_nonarray 00:06:25.573 ************************************ 00:06:25.573 13:19:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:25.573 [2024-11-26 13:19:14.092921] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:25.573 [2024-11-26 13:19:14.093030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:06:25.831 [2024-11-26 13:19:14.252166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.831 [2024-11-26 13:19:14.348337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.831 [2024-11-26 13:19:14.348427] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
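Both JSON negative tests fail exactly where intended: bdev_json_nonenclosed feeds a config that is not enclosed in {}, and bdev_json_nonarray (above) one whose "subsystems" key is not an array; the rpc and app_stop lines that follow are the orderly teardown after the expected failure. A sketch of the two failure shapes, reconstructed from the error messages (the real nonenclosed.json and nonarray.json under test/bdev/ may differ in detail):
# Hypothetical stand-ins for the two malformed configs:
printf '"subsystems": []\n' > /tmp/nonenclosed.json   # fragment not enclosed in {}
printf '{ "subsystems": {} }\n' > /tmp/nonarray.json  # "subsystems" is not an array
# Either one makes json_config_prepare_ctx reject the config and the app
# exit non-zero, which is precisely what the test asserts:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 \
  || echo 'failed as expected'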
00:06:25.831 [2024-11-26 13:19:14.348456] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:25.831 [2024-11-26 13:19:14.348466] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.089 00:06:26.089 real 0m0.493s 00:06:26.089 user 0m0.298s 00:06:26.089 sys 0m0.092s 00:06:26.089 13:19:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.089 13:19:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 ************************************ 00:06:26.089 END TEST bdev_json_nonarray 00:06:26.089 ************************************ 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:26.089 13:19:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:26.089 00:06:26.089 real 0m39.337s 00:06:26.089 user 1m2.929s 00:06:26.089 sys 0m4.945s 00:06:26.089 13:19:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.089 13:19:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 ************************************ 00:06:26.089 END TEST blockdev_nvme 00:06:26.089 ************************************ 00:06:26.089 13:19:14 -- spdk/autotest.sh@209 -- # uname -s 00:06:26.089 13:19:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:26.089 13:19:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:26.089 13:19:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.089 13:19:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.089 13:19:14 -- common/autotest_common.sh@10 -- # set +x 00:06:26.089 ************************************ 00:06:26.089 START TEST blockdev_nvme_gpt 00:06:26.089 ************************************ 00:06:26.089 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:26.348 * Looking for test storage... 
00:06:26.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.348 13:19:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.348 --rc genhtml_branch_coverage=1 00:06:26.348 --rc genhtml_function_coverage=1 00:06:26.348 --rc genhtml_legend=1 00:06:26.348 --rc geninfo_all_blocks=1 00:06:26.348 --rc geninfo_unexecuted_blocks=1 00:06:26.348 00:06:26.348 ' 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.348 --rc 
genhtml_branch_coverage=1 00:06:26.348 --rc genhtml_function_coverage=1 00:06:26.348 --rc genhtml_legend=1 00:06:26.348 --rc geninfo_all_blocks=1 00:06:26.348 --rc geninfo_unexecuted_blocks=1 00:06:26.348 00:06:26.348 ' 00:06:26.348 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.348 --rc genhtml_branch_coverage=1 00:06:26.348 --rc genhtml_function_coverage=1 00:06:26.348 --rc genhtml_legend=1 00:06:26.349 --rc geninfo_all_blocks=1 00:06:26.349 --rc geninfo_unexecuted_blocks=1 00:06:26.349 00:06:26.349 ' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.349 --rc genhtml_branch_coverage=1 00:06:26.349 --rc genhtml_function_coverage=1 00:06:26.349 --rc genhtml_legend=1 00:06:26.349 --rc geninfo_all_blocks=1 00:06:26.349 --rc geninfo_unexecuted_blocks=1 00:06:26.349 00:06:26.349 ' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60690 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60690 
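At this point blockdev.sh has launched a standalone spdk_tgt (pid 60690) and parks in waitforlisten until the RPC socket answers; the helper's retry internals are traced below. The pattern, reduced to a sketch (the spdk_get_version probe and the retry cadence are assumptions, not the helper's verbatim code):
# Launch the target and poll its UNIX-domain RPC socket until it is ready:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
for _ in $(seq 1 100); do   # max_retries=100, matching the trace below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
  sleep 0.1
done
The trap installed just above ensures the target is killed if any later step aborts the suite.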
00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60690 ']' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.349 13:19:14 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:26.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.349 13:19:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:26.349 [2024-11-26 13:19:14.824387] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:26.349 [2024-11-26 13:19:14.824527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:06:26.607 [2024-11-26 13:19:14.983970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.607 [2024-11-26 13:19:15.080630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.172 13:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.172 13:19:15 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:27.172 13:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:27.172 13:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:27.172 13:19:15 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:27.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.686 Waiting for block devices as requested 00:06:27.686 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.686 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.686 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.944 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:33.210 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:33.210 13:19:21 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:33.210 BYT; 00:06:33.210 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:33.210 BYT; 00:06:33.210 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:33.210 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:33.210 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:33.211 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:33.211 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:33.211 13:19:21 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:33.211 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:33.211 13:19:21 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:34.144 The operation has completed successfully. 00:06:34.144 13:19:22 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:35.077 The operation has completed successfully. 00:06:35.077 13:19:23 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:35.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.899 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.899 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.899 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.899 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.899 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:35.899 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.899 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.899 [] 00:06:35.899 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.899 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:35.899 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:35.899 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:35.899 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:36.157 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:36.157 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.157 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:36.416 13:19:24 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:36.416 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:36.416 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:36.417 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "62ad7b44-ebc0-4b1d-947f-b73caeef9a2b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "62ad7b44-ebc0-4b1d-947f-b73caeef9a2b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4bab1838-8489-4ee7-a182-175783b14cf5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bab1838-8489-4ee7-a182-175783b14cf5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7cb54806-8e7e-4019-a866-1f0334d6d242"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7cb54806-8e7e-4019-a866-1f0334d6d242",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d1eaa752-92c6-4d57-b7c2-efefca6bc43e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d1eaa752-92c6-4d57-b7c2-efefca6bc43e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "21cd65ca-9b19-407f-ade5-0f4a8f0dc452"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "21cd65ca-9b19-407f-ade5-0f4a8f0dc452",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:36.417 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:36.417 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:36.417 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:36.417 13:19:24 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60690 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60690 ']' 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60690 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60690 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.417 killing process with pid 60690 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60690' 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60690 00:06:36.417 13:19:24 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60690 00:06:37.793 13:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.793 13:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.793 13:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:37.793 13:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.793 13:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:37.793 ************************************ 00:06:37.793 START TEST bdev_hello_world 00:06:37.793 ************************************ 00:06:37.793 13:19:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.793 
[2024-11-26 13:19:26.174804] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:37.793 [2024-11-26 13:19:26.174921] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:06:37.793 [2024-11-26 13:19:26.329734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.052 [2024-11-26 13:19:26.403925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.686 [2024-11-26 13:19:26.890088] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:38.686 [2024-11-26 13:19:26.890119] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:38.686 [2024-11-26 13:19:26.890134] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:38.686 [2024-11-26 13:19:26.892057] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:38.686 [2024-11-26 13:19:26.892734] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:38.686 [2024-11-26 13:19:26.892755] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:38.686 [2024-11-26 13:19:26.892975] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:38.686 00:06:38.686 [2024-11-26 13:19:26.892996] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.958 00:06:38.958 real 0m1.330s 00:06:38.958 user 0m1.064s 00:06:38.958 sys 0m0.161s 00:06:38.958 13:19:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.958 13:19:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:38.958 ************************************ 00:06:38.958 END TEST bdev_hello_world 00:06:38.958 ************************************ 00:06:38.958 13:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:38.958 13:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.958 13:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.958 13:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.958 ************************************ 00:06:38.958 START TEST bdev_bounds 00:06:38.958 ************************************ 00:06:38.958 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:38.958 Process bdevio pid: 61345 00:06:38.958 13:19:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61345 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61345' 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61345 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61345 ']' 00:06:38.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.959 13:19:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:39.279 [2024-11-26 13:19:27.565171] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:39.279 [2024-11-26 13:19:27.565287] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:06:39.279 [2024-11-26 13:19:27.719071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.279 [2024-11-26 13:19:27.798317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.279 [2024-11-26 13:19:27.798757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.279 [2024-11-26 13:19:27.798757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.935 13:19:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.936 13:19:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:39.936 13:19:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.936 I/O targets: 00:06:39.936 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.936 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:39.936 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:39.936 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.936 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.936 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.936 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.936 00:06:39.936 00:06:39.936 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.936 http://cunit.sourceforge.net/ 00:06:39.936 00:06:39.936 00:06:39.936 Suite: bdevio tests on: Nvme3n1 00:06:39.936 Test: blockdev write read block ...passed 00:06:39.936 Test: blockdev write zeroes read block ...passed 00:06:39.936 Test: blockdev write zeroes read no split ...passed 00:06:40.200 Test: blockdev write zeroes read split ...passed 00:06:40.200 Test: blockdev write zeroes read split partial ...passed 00:06:40.200 Test: blockdev reset ...[2024-11-26 13:19:28.534158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:40.200 [2024-11-26 13:19:28.537982] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
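bdevio is now up with three reactors, matching its -c 0x7 core mask; tests.py then connects over /var/tmp/spdk.sock and fires perform_tests, which prints the I/O target list and the per-bdev suites below. The server/client split, as a sketch (the fixed sleep stands in for the waitforlisten helper):
# Server: bdevio in wait mode (-w) with the same bdev.json as the log
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
bdevio_pid=$!
sleep 1   # stand-in for waitforlisten on /var/tmp/spdk.sock
# Client: drive every registered CUnit suite over RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"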
00:06:40.200 passed 00:06:40.200 Test: blockdev write read 8 blocks ...passed 00:06:40.200 Test: blockdev write read size > 128k ...passed 00:06:40.200 Test: blockdev write read invalid size ...passed 00:06:40.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.200 Test: blockdev write read max offset ...passed 00:06:40.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.200 Test: blockdev writev readv 8 blocks ...passed 00:06:40.200 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.200 Test: blockdev writev readv block ...passed 00:06:40.200 Test: blockdev writev readv size > 128k ...passed 00:06:40.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.200 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.557059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8404000 len:0x1000 00:06:40.200 [2024-11-26 13:19:28.557255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev nvme passthru rw ...passed 00:06:40.200 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:19:28.559271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.200 [2024-11-26 13:19:28.559456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:06:40.200 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev copy ...passed 00:06:40.200 Suite: bdevio tests on: Nvme2n3 00:06:40.200 Test: blockdev write read block ...passed 00:06:40.200 Test: blockdev write zeroes read block ...passed 00:06:40.200 Test: blockdev write zeroes read no split ...passed 00:06:40.200 Test: blockdev write zeroes read split ...passed 00:06:40.200 Test: blockdev write zeroes read split partial ...passed 00:06:40.200 Test: blockdev reset ...[2024-11-26 13:19:28.615743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.200 [2024-11-26 13:19:28.620548] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:40.200 passed 00:06:40.200 Test: blockdev write read 8 blocks ...passed 00:06:40.200 Test: blockdev write read size > 128k ...passed 00:06:40.200 Test: blockdev write read invalid size ...passed 00:06:40.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.200 Test: blockdev write read max offset ...passed 00:06:40.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.200 Test: blockdev writev readv 8 blocks ...passed 00:06:40.200 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.200 Test: blockdev writev readv block ...passed 00:06:40.200 Test: blockdev writev readv size > 128k ...passed 00:06:40.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.200 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.638569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8402000 len:0x1000 00:06:40.200 [2024-11-26 13:19:28.638742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev nvme passthru rw ...passed 00:06:40.200 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:19:28.641187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.200 passed 00:06:40.200 Test: blockdev nvme admin passthru ...[2024-11-26 13:19:28.641279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev copy ...passed 00:06:40.200 Suite: bdevio tests on: Nvme2n2 00:06:40.200 Test: blockdev write read block ...passed 00:06:40.200 Test: blockdev write zeroes read block ...passed 00:06:40.200 Test: blockdev write zeroes read no split ...passed 00:06:40.200 Test: blockdev write zeroes read split ...passed 00:06:40.200 Test: blockdev write zeroes read split partial ...passed 00:06:40.200 Test: blockdev reset ...[2024-11-26 13:19:28.697073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.200 [2024-11-26 13:19:28.701138] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:40.200 passed 00:06:40.200 Test: blockdev write read 8 blocks ...passed 00:06:40.200 Test: blockdev write read size > 128k ...passed 00:06:40.200 Test: blockdev write read invalid size ...passed 00:06:40.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.200 Test: blockdev write read max offset ...passed 00:06:40.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.200 Test: blockdev writev readv 8 blocks ...passed 00:06:40.200 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.200 Test: blockdev writev readv block ...passed 00:06:40.200 Test: blockdev writev readv size > 128k ...passed 00:06:40.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.200 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.718117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac838000 len:0x1000 00:06:40.200 [2024-11-26 13:19:28.718214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev nvme passthru rw ...passed 00:06:40.200 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:19:28.720154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.200 [2024-11-26 13:19:28.720234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.200 passed 00:06:40.200 Test: blockdev nvme admin passthru ...passed 00:06:40.200 Test: blockdev copy ...passed 00:06:40.201 Suite: bdevio tests on: Nvme2n1 00:06:40.201 Test: blockdev write read block ...passed 00:06:40.201 Test: blockdev write zeroes read block ...passed 00:06:40.201 Test: blockdev write zeroes read no split ...passed 00:06:40.201 Test: blockdev write zeroes read split ...passed 00:06:40.458 Test: blockdev write zeroes read split partial ...passed 00:06:40.458 Test: blockdev reset ...[2024-11-26 13:19:28.778813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:40.458 [2024-11-26 13:19:28.782418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:40.458 passed 00:06:40.458 Test: blockdev write read 8 blocks ...passed 00:06:40.458 Test: blockdev write read size > 128k ...passed 00:06:40.458 Test: blockdev write read invalid size ...passed 00:06:40.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.458 Test: blockdev write read max offset ...passed 00:06:40.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.458 Test: blockdev writev readv 8 blocks ...passed 00:06:40.458 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.459 Test: blockdev writev readv block ...passed 00:06:40.459 Test: blockdev writev readv size > 128k ...passed 00:06:40.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.459 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.799467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac834000 len:0x1000 00:06:40.459 [2024-11-26 13:19:28.799615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.459 passed 00:06:40.459 Test: blockdev nvme passthru rw ...passed 00:06:40.459 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:19:28.801804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:40.459 [2024-11-26 13:19:28.801892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:40.459 passed 00:06:40.459 Test: blockdev nvme admin passthru ...passed 00:06:40.459 Test: blockdev copy ...passed 00:06:40.459 Suite: bdevio tests on: Nvme1n1p2 00:06:40.459 Test: blockdev write read block ...passed 00:06:40.459 Test: blockdev write zeroes read block ...passed 00:06:40.459 Test: blockdev write zeroes read no split ...passed 00:06:40.459 Test: blockdev write zeroes read split ...passed 00:06:40.459 Test: blockdev write zeroes read split partial ...passed 00:06:40.459 Test: blockdev reset ...[2024-11-26 13:19:28.862781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:40.459 [2024-11-26 13:19:28.865914] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:40.459 passed 00:06:40.459 Test: blockdev write read 8 blocks ...passed 00:06:40.459 Test: blockdev write read size > 128k ...passed 00:06:40.459 Test: blockdev write read invalid size ...passed 00:06:40.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.459 Test: blockdev write read max offset ...passed 00:06:40.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.459 Test: blockdev writev readv 8 blocks ...passed 00:06:40.459 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.459 Test: blockdev writev readv block ...passed 00:06:40.459 Test: blockdev writev readv size > 128k ...passed 00:06:40.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.459 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.882925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ac830000 len:0x1000 00:06:40.459 [2024-11-26 13:19:28.883057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.459 passed 00:06:40.459 Test: blockdev nvme passthru rw ...passed 00:06:40.459 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.459 Test: blockdev nvme admin passthru ...passed 00:06:40.459 Test: blockdev copy ...passed 00:06:40.459 Suite: bdevio tests on: Nvme1n1p1 00:06:40.459 Test: blockdev write read block ...passed 00:06:40.459 Test: blockdev write zeroes read block ...passed 00:06:40.459 Test: blockdev write zeroes read no split ...passed 00:06:40.459 Test: blockdev write zeroes read split ...passed 00:06:40.459 Test: blockdev write zeroes read split partial ...passed 00:06:40.459 Test: blockdev reset ...[2024-11-26 13:19:28.933665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:40.459 [2024-11-26 13:19:28.937675] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
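The paired NOTICE lines above are the signature of a bdevio reset test: the controller at the given PCI address is disconnected, then reattached and reported as successfully reset. As a rough out-of-band equivalent, the same reset can be driven through SPDK's RPC interface; the socket path and controller/bdev names below are illustrative assumptions, not values taken from this run:

#!/usr/bin/env bash
# Hedged sketch: trigger an NVMe controller reset through SPDK's rpc.py and
# check that the namespace bdev is still present afterwards. SOCK and the
# controller/bdev names are assumptions for illustration.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock          # assumed default application socket
sudo "$RPC" -s "$SOCK" bdev_nvme_reset_controller Nvme1
# bdev_get_bdevs returns JSON; a non-error reply means the namespace came back.
sudo "$RPC" -s "$SOCK" bdev_get_bdevs -b Nvme1n1 >/dev/null && echo "Nvme1n1 back online"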
00:06:40.459 passed 00:06:40.459 Test: blockdev write read 8 blocks ...passed 00:06:40.459 Test: blockdev write read size > 128k ...passed 00:06:40.459 Test: blockdev write read invalid size ...passed 00:06:40.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.459 Test: blockdev write read max offset ...passed 00:06:40.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.459 Test: blockdev writev readv 8 blocks ...passed 00:06:40.459 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.459 Test: blockdev writev readv block ...passed 00:06:40.459 Test: blockdev writev readv size > 128k ...passed 00:06:40.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.459 Test: blockdev comparev and writev ...[2024-11-26 13:19:28.947385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x290e0e000 len:0x1000 00:06:40.459 [2024-11-26 13:19:28.947493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:40.459 passed 00:06:40.459 Test: blockdev nvme passthru rw ...passed 00:06:40.459 Test: blockdev nvme passthru vendor specific ...passed 00:06:40.459 Test: blockdev nvme admin passthru ...passed 00:06:40.459 Test: blockdev copy ...passed 00:06:40.459 Suite: bdevio tests on: Nvme0n1 00:06:40.459 Test: blockdev write read block ...passed 00:06:40.459 Test: blockdev write zeroes read block ...passed 00:06:40.459 Test: blockdev write zeroes read no split ...passed 00:06:40.459 Test: blockdev write zeroes read split ...passed 00:06:40.459 Test: blockdev write zeroes read split partial ...passed 00:06:40.459 Test: blockdev reset ...[2024-11-26 13:19:28.994533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:40.459 [2024-11-26 13:19:28.997830] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:40.459 passed 00:06:40.459 Test: blockdev write read 8 blocks ...passed 00:06:40.459 Test: blockdev write read size > 128k ...passed 00:06:40.459 Test: blockdev write read invalid size ...passed 00:06:40.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.459 Test: blockdev write read max offset ...passed 00:06:40.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.459 Test: blockdev writev readv 8 blocks ...passed 00:06:40.459 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.459 Test: blockdev writev readv block ...passed 00:06:40.459 Test: blockdev writev readv size > 128k ...passed 00:06:40.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.459 Test: blockdev comparev and writev ...passed 00:06:40.459 Test: blockdev nvme passthru rw ...[2024-11-26 13:19:29.013218] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:40.459 separate metadata which is not supported yet. 
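The ERROR entry above is expected: the harness skips comparev_and_writev on Nvme0n1 because that bdev carries separate metadata, while the COMPARE FAILURE (02/85) notices printed for the other namespaces are the deliberate mismatch the test asserts on. Outside SPDK, the same expected-failure behaviour can be sketched with nvme-cli against a scratch kernel namespace; the device path, 512-byte LBA size, and exact flag spellings below are assumptions:

#!/usr/bin/env bash
# Hedged sketch of the expected-failure COMPARE check, using nvme-cli against a
# scratch kernel namespace instead of the SPDK bdev layer. Destroys LBA 0 of
# the assumed device /dev/nvme0n1.
set -u
dev=/dev/nvme0n1
printf 'A%.0s' {1..512} > /tmp/wrote.bin
printf 'B%.0s' {1..512} > /tmp/other.bin
sudo nvme write   "$dev" --start-block=0 --block-count=0 --data-size=512 --data=/tmp/wrote.bin
# Matching data must compare clean...
sudo nvme compare "$dev" --start-block=0 --block-count=0 --data-size=512 --data=/tmp/wrote.bin
# ...and mismatching data must complete with Compare Failure (02/85), which is
# exactly the completion the bdevio log lines above print as a NOTICE.
if sudo nvme compare "$dev" --start-block=0 --block-count=0 --data-size=512 --data=/tmp/other.bin; then
  echo "unexpected: compare succeeded" >&2; exit 1
fi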
00:06:40.459 passed 00:06:40.459 Test: blockdev nvme passthru vendor specific ...[2024-11-26 13:19:29.014808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:40.459 [2024-11-26 13:19:29.014896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:40.459 passed 00:06:40.721 Test: blockdev nvme admin passthru ...passed 00:06:40.721 Test: blockdev copy ...passed 00:06:40.721 00:06:40.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.721 suites 7 7 n/a 0 0 00:06:40.721 tests 161 161 161 0 0 00:06:40.721 asserts 1025 1025 1025 0 n/a 00:06:40.721 00:06:40.721 Elapsed time = 1.359 seconds 00:06:40.721 0 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61345 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61345 ']' 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61345 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61345 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61345' 00:06:40.721 killing process with pid 61345 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61345 00:06:40.721 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61345 00:06:41.289 13:19:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:41.289 00:06:41.289 real 0m2.235s 00:06:41.289 user 0m5.691s 00:06:41.289 sys 0m0.269s 00:06:41.289 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.289 13:19:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:41.289 ************************************ 00:06:41.289 END TEST bdev_bounds 00:06:41.289 ************************************ 00:06:41.289 13:19:29 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.290 13:19:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:41.290 13:19:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.290 13:19:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.290 ************************************ 00:06:41.290 START TEST bdev_nbd 00:06:41.290 ************************************ 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61400 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61400 /var/tmp/spdk-nbd.sock 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61400 ']' 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.290 13:19:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:41.565 [2024-11-26 13:19:29.864346] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
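At this point the harness has launched bdev_svc with the JSON bdev config and is waiting for its RPC socket to come up (the waitforlisten trace above). A standalone sketch of that handshake, with the poll count and cadence as assumptions:

#!/usr/bin/env bash
# Hedged sketch of the startup handshake: launch bdev_svc against a JSON bdev
# config, then poll the RPC socket until it answers. Paths mirror the trace;
# the 100-try/0.1s cadence is an assumption.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
sudo "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 \
    --json "$SPDK/test/bdev/bdev.json" &
nbd_pid=$!
for _ in $(seq 1 100); do
  # rpc_get_methods answers as soon as the app's RPC server is listening.
  if sudo "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
    echo "bdev_svc ($nbd_pid) is listening on $SOCK"; exit 0
  fi
  sleep 0.1
done
echo "timed out waiting for $SOCK" >&2; kill "$nbd_pid"; exit 1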
00:06:41.565 [2024-11-26 13:19:29.864476] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.565 [2024-11-26 13:19:30.026488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.832 [2024-11-26 13:19:30.128353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.399 1+0 records in 00:06:42.399 1+0 records out 00:06:42.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446713 s, 9.2 MB/s 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.399 13:19:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.659 1+0 records in 00:06:42.659 1+0 records out 00:06:42.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935145 s, 4.4 MB/s 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.659 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.917 1+0 records in 00:06:42.917 1+0 records out 00:06:42.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112167 s, 3.7 MB/s 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.917 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.176 1+0 records in 00:06:43.176 1+0 records out 00:06:43.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560884 s, 7.3 MB/s 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.176 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.434 1+0 records in 00:06:43.434 1+0 records out 00:06:43.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00343792 s, 1.2 MB/s 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.434 13:19:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
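Each nbd_start_disk call above is followed by the same readiness probe: wait for the device name to appear in /proc/partitions, then prove the data path with a single 4 KiB O_DIRECT read (the 1+0 records in/out and throughput lines). A condensed sketch of that probe; the device name and scratch path are illustrative:

#!/usr/bin/env bash
# Hedged sketch of the readiness probe traced above: a /dev/nbdX export counts
# as up once it is listed in /proc/partitions and one direct-I/O read succeeds.
set -euo pipefail
waitfornbd() {
  local name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions && break
    sleep 0.1                      # retry cadence is an assumption
  done
  # One 4 KiB O_DIRECT read proves the kernel<->SPDK nbd pipe moves real data.
  dd if="/dev/$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}
waitfornbd nbd0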
00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.693 1+0 records in 00:06:43.693 1+0 records out 00:06:43.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000754714 s, 5.4 MB/s 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.693 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.952 1+0 records in 00:06:43.952 1+0 records out 00:06:43.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102311 s, 4.0 MB/s 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd0", 00:06:43.952 "bdev_name": "Nvme0n1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd1", 00:06:43.952 "bdev_name": "Nvme1n1p1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd2", 00:06:43.952 "bdev_name": "Nvme1n1p2" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd3", 00:06:43.952 "bdev_name": "Nvme2n1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd4", 00:06:43.952 "bdev_name": "Nvme2n2" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd5", 00:06:43.952 "bdev_name": "Nvme2n3" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd6", 00:06:43.952 "bdev_name": "Nvme3n1" 00:06:43.952 } 00:06:43.952 ]' 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:43.952 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd0", 00:06:43.952 "bdev_name": "Nvme0n1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd1", 00:06:43.952 "bdev_name": "Nvme1n1p1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd2", 00:06:43.952 "bdev_name": "Nvme1n1p2" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd3", 00:06:43.952 "bdev_name": "Nvme2n1" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd4", 00:06:43.952 "bdev_name": "Nvme2n2" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd5", 00:06:43.952 "bdev_name": "Nvme2n3" 00:06:43.952 }, 00:06:43.952 { 00:06:43.952 "nbd_device": "/dev/nbd6", 00:06:43.952 "bdev_name": "Nvme3n1" 00:06:43.952 } 00:06:43.952 ]' 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.210 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.468 13:19:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.727 13:19:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.986 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.244 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
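The nbd_stop_disk traces above show the teardown half: each export is stopped over RPC, then waitfornbd_exit polls /proc/partitions until the kernel device disappears. A minimal sketch of that pairing, with the socket path and retry budget assumed:

#!/usr/bin/env bash
# Hedged sketch of the stop/verify pairing: ask the app to drop the export,
# then wait for the block device to leave /proc/partitions.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
nbd_stop() {
  local dev=$1 name i
  sudo "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
  name=$(basename "$dev")
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$name" /proc/partitions || return 0
    sleep 0.1
  done
  echo "$dev still present after stop" >&2
  return 1
}
nbd_stop /dev/nbd0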
00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.503 13:19:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.761 13:19:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.761 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:46.020 /dev/nbd0 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.020 1+0 records in 00:06:46.020 1+0 records out 00:06:46.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527688 s, 7.8 MB/s 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.020 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:46.278 /dev/nbd1 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.278 13:19:34 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.278 1+0 records in 00:06:46.278 1+0 records out 00:06:46.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119201 s, 3.4 MB/s 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.278 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.279 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:46.537 /dev/nbd10 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.537 1+0 records in 00:06:46.537 1+0 records out 00:06:46.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517006 s, 7.9 MB/s 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.537 13:19:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:46.796 /dev/nbd11 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.796 1+0 records in 00:06:46.796 1+0 records out 00:06:46.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772552 s, 5.3 MB/s 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:46.796 /dev/nbd12 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
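This second pass re-exports all seven bdevs onto /dev/nbd0 through /dev/nbd14 and then reads the mapping back as JSON (the nbd_disks_json arrays earlier in the run). The readback amounts to one RPC plus a jq filter; the paths come from the trace, while the bdev/device pair is illustrative:

#!/usr/bin/env bash
# Hedged sketch: export one bdev, then list every active bdev<->nbd pairing.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
sudo "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
# nbd_get_disks returns [{"nbd_device": ..., "bdev_name": ...}, ...]
sudo "$RPC" -s "$SOCK" nbd_get_disks \
  | jq -r '.[] | "\(.bdev_name) -> \(.nbd_device)"'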
00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.796 1+0 records in 00:06:46.796 1+0 records out 00:06:46.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978204 s, 4.2 MB/s 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.796 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:47.057 /dev/nbd13 00:06:47.057 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.058 1+0 records in 00:06:47.058 1+0 records out 00:06:47.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922711 s, 4.4 MB/s 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.058 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:47.320 /dev/nbd14 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.320 1+0 records in 00:06:47.320 1+0 records out 00:06:47.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113631 s, 3.6 MB/s 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.320 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.581 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd0", 00:06:47.581 "bdev_name": "Nvme0n1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd1", 00:06:47.581 "bdev_name": "Nvme1n1p1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd10", 00:06:47.581 "bdev_name": "Nvme1n1p2" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd11", 00:06:47.581 "bdev_name": "Nvme2n1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd12", 00:06:47.581 "bdev_name": "Nvme2n2" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd13", 00:06:47.581 "bdev_name": "Nvme2n3" 
00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd14", 00:06:47.581 "bdev_name": "Nvme3n1" 00:06:47.581 } 00:06:47.581 ]' 00:06:47.581 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd0", 00:06:47.581 "bdev_name": "Nvme0n1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd1", 00:06:47.581 "bdev_name": "Nvme1n1p1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd10", 00:06:47.581 "bdev_name": "Nvme1n1p2" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd11", 00:06:47.581 "bdev_name": "Nvme2n1" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd12", 00:06:47.581 "bdev_name": "Nvme2n2" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd13", 00:06:47.581 "bdev_name": "Nvme2n3" 00:06:47.581 }, 00:06:47.581 { 00:06:47.581 "nbd_device": "/dev/nbd14", 00:06:47.581 "bdev_name": "Nvme3n1" 00:06:47.581 } 00:06:47.581 ]' 00:06:47.581 13:19:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.581 /dev/nbd1 00:06:47.581 /dev/nbd10 00:06:47.581 /dev/nbd11 00:06:47.581 /dev/nbd12 00:06:47.581 /dev/nbd13 00:06:47.581 /dev/nbd14' 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.581 /dev/nbd1 00:06:47.581 /dev/nbd10 00:06:47.581 /dev/nbd11 00:06:47.581 /dev/nbd12 00:06:47.581 /dev/nbd13 00:06:47.581 /dev/nbd14' 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:47.581 256+0 records in 00:06:47.581 256+0 records out 00:06:47.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609894 s, 172 MB/s 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.581 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.841 256+0 records in 00:06:47.841 256+0 records out 00:06:47.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.191964 s, 5.5 MB/s 00:06:47.841 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.841 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.101 256+0 records in 00:06:48.101 256+0 records out 00:06:48.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177236 s, 5.9 MB/s 00:06:48.101 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.101 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:48.361 256+0 records in 00:06:48.361 256+0 records out 00:06:48.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259802 s, 4.0 MB/s 00:06:48.361 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.361 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:48.620 256+0 records in 00:06:48.620 256+0 records out 00:06:48.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238967 s, 4.4 MB/s 00:06:48.620 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.620 13:19:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:48.620 256+0 records in 00:06:48.620 256+0 records out 00:06:48.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204635 s, 5.1 MB/s 00:06:48.620 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.621 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:48.881 256+0 records in 00:06:48.881 256+0 records out 00:06:48.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210499 s, 5.0 MB/s 00:06:48.881 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.881 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:49.141 256+0 records in 00:06:49.141 256+0 records out 00:06:49.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134702 s, 7.8 MB/s 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.141 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.402 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.663 13:19:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.663 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.923 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.184 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:50.443 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:50.443 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:50.443 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:50.443 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.443 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.444 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:50.444 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.444 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.444 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.444 13:19:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:50.707 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:50.707 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:50.707 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:50.707 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.707 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.708 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:50.971 malloc_lvol_verify 00:06:50.971 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:51.230 af97995f-0f25-43d3-bd38-6af235fd0ffc 00:06:51.230 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:51.488 38626db5-4f64-4bcb-932b-10d7a0550abb 00:06:51.488 13:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:51.749 /dev/nbd0 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:51.749 mke2fs 1.47.0 (5-Feb-2023) 00:06:51.749 Discarding device blocks: 0/4096 done 00:06:51.749 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:51.749 00:06:51.749 Allocating group tables: 0/1 done 00:06:51.749 Writing inode tables: 0/1 done 00:06:51.749 Creating journal (1024 blocks): done 00:06:51.749 Writing superblocks and filesystem accounting information: 0/1 done 00:06:51.749 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:51.749 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61400 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61400 ']' 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61400 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61400 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61400' 00:06:52.010 killing process with pid 61400 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61400 00:06:52.010 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61400 00:06:52.574 13:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:52.574 00:06:52.574 real 0m11.180s 00:06:52.574 user 0m15.338s 00:06:52.574 sys 0m3.632s 00:06:52.574 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.574 ************************************ 00:06:52.574 END TEST bdev_nbd 00:06:52.574 ************************************ 00:06:52.574 13:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:52.574 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:52.574 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:52.574 skipping fio tests on NVMe due to multi-ns failures. 00:06:52.574 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:52.574 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:52.575 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.575 13:19:41 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:52.575 13:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:52.575 13:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.575 13:19:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.575 ************************************ 00:06:52.575 START TEST bdev_verify 00:06:52.575 ************************************ 00:06:52.575 13:19:41 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:52.575 [2024-11-26 13:19:41.082610] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:52.575 [2024-11-26 13:19:41.082725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61815 ] 00:06:52.833 [2024-11-26 13:19:41.237453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.833 [2024-11-26 13:19:41.317291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.833 [2024-11-26 13:19:41.317294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.399 Running I/O for 5 seconds... 
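The bdev_verify test starting here is a single bdevperf run: queue depth 128, 4 KiB I/Os, a verify (write, read back, check) workload for 5 seconds on cores 0 and 1. An equivalent standalone invocation, assuming the vagrant repo layout from the trace:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
# -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write, read back, and
# check data; -t 5: five-second run; -C: drive every bdev from every core
# (each bdev shows one job per core in the results below); -m 0x3: cores 0-1.
"$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```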
00:06:55.724 21504.00 IOPS, 84.00 MiB/s [2024-11-26T13:19:45.239Z] 22400.00 IOPS, 87.50 MiB/s [2024-11-26T13:19:46.182Z] 22698.67 IOPS, 88.67 MiB/s [2024-11-26T13:19:47.126Z] 22176.00 IOPS, 86.62 MiB/s [2024-11-26T13:19:47.126Z] 22246.40 IOPS, 86.90 MiB/s 00:06:58.556 Latency(us) 00:06:58.556 [2024-11-26T13:19:47.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.556 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0xbd0bd 00:06:58.556 Nvme0n1 : 5.07 1566.29 6.12 0.00 0.00 81472.79 15829.46 93161.94 00:06:58.556 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:58.556 Nvme0n1 : 5.08 1563.72 6.11 0.00 0.00 81654.34 11544.42 91548.75 00:06:58.556 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x4ff80 00:06:58.556 Nvme1n1p1 : 5.07 1565.81 6.12 0.00 0.00 81366.01 17341.83 83079.48 00:06:58.556 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:58.556 Nvme1n1p1 : 5.08 1563.27 6.11 0.00 0.00 81486.61 12149.37 77433.30 00:06:58.556 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x4ff7f 00:06:58.556 Nvme1n1p2 : 5.07 1565.33 6.11 0.00 0.00 81215.96 18753.38 78239.90 00:06:58.556 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:58.556 Nvme1n1p2 : 5.08 1562.80 6.10 0.00 0.00 81351.81 11897.30 74206.92 00:06:58.556 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x80000 00:06:58.556 Nvme2n1 : 5.07 1564.87 6.11 0.00 0.00 81067.84 18551.73 68964.04 00:06:58.556 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x80000 length 0x80000 00:06:58.556 Nvme2n1 : 5.08 1562.37 6.10 0.00 0.00 81188.81 12300.60 67754.14 00:06:58.556 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x80000 00:06:58.556 Nvme2n2 : 5.07 1564.33 6.11 0.00 0.00 80912.30 18652.55 70577.23 00:06:58.556 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x80000 length 0x80000 00:06:58.556 Nvme2n2 : 5.08 1561.94 6.10 0.00 0.00 81026.14 12653.49 68157.44 00:06:58.556 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x80000 00:06:58.556 Nvme2n3 : 5.08 1574.42 6.15 0.00 0.00 80275.70 2445.00 73400.32 00:06:58.556 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x80000 length 0x80000 00:06:58.556 Nvme2n3 : 5.08 1561.04 6.10 0.00 0.00 80875.18 12653.49 72190.42 00:06:58.556 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x0 length 0x20000 00:06:58.556 Nvme3n1 : 5.08 1573.51 6.15 0.00 0.00 80121.43 4335.46 75820.11 00:06:58.556 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:58.556 Verification LBA range: start 0x20000 length 0x20000 00:06:58.556 Nvme3n1 
: 5.09 1560.20 6.09 0.00 0.00 80716.87 12149.37 76223.41 00:06:58.556 [2024-11-26T13:19:47.126Z] =================================================================================================================== 00:06:58.556 [2024-11-26T13:19:47.126Z] Total : 21909.91 85.59 0.00 0.00 81051.29 2445.00 93161.94 00:06:59.939 00:06:59.939 real 0m7.164s 00:06:59.939 user 0m13.493s 00:06:59.939 sys 0m0.190s 00:06:59.939 13:19:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.939 13:19:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:59.939 ************************************ 00:06:59.939 END TEST bdev_verify 00:06:59.939 ************************************ 00:06:59.939 13:19:48 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:59.939 13:19:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:59.939 13:19:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.939 13:19:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.939 ************************************ 00:06:59.939 START TEST bdev_verify_big_io 00:06:59.939 ************************************ 00:06:59.939 13:19:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:59.939 [2024-11-26 13:19:48.283634] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:06:59.939 [2024-11-26 13:19:48.283739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61913 ] 00:06:59.939 [2024-11-26 13:19:48.443836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.201 [2024-11-26 13:19:48.541322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.201 [2024-11-26 13:19:48.541409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.773 Running I/O for 5 seconds... 
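The MiB/s column in these result tables is just IOPS multiplied by the I/O size: for the 4 KiB verify run above, 21504 IOPS × 4096 B = 84.00 MiB/s, matching the first progress line, and the same arithmetic applies to the 64 KiB big-I/O run below. A one-liner to reproduce the conversion:

```bash
# Throughput check for the first 4 KiB progress line above (21504.00 IOPS, 84.00 MiB/s).
awk 'BEGIN { iops = 21504; bs = 4096; printf "%.2f MiB/s\n", iops * bs / 1048576 }'
# prints: 84.00 MiB/s
```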
00:07:04.259 1325.00 IOPS, 82.81 MiB/s [2024-11-26T13:19:54.732Z] 1451.50 IOPS, 90.72 MiB/s [2024-11-26T13:19:55.303Z] 1776.67 IOPS, 111.04 MiB/s [2024-11-26T13:19:55.564Z] 2757.50 IOPS, 172.34 MiB/s 00:07:06.994 Latency(us) 00:07:06.994 [2024-11-26T13:19:55.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.994 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0xbd0b 00:07:06.994 Nvme0n1 : 5.59 118.14 7.38 0.00 0.00 1030547.57 12351.02 1258291.20 00:07:06.994 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:06.994 Nvme0n1 : 5.75 129.99 8.12 0.00 0.00 941423.05 22887.19 1090519.04 00:07:06.994 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x4ff8 00:07:06.994 Nvme1n1p1 : 5.74 122.74 7.67 0.00 0.00 967476.35 100824.62 1071160.71 00:07:06.994 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:06.994 Nvme1n1p1 : 5.86 131.31 8.21 0.00 0.00 897087.08 91952.05 903388.55 00:07:06.994 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x4ff7 00:07:06.994 Nvme1n1p2 : 5.83 120.90 7.56 0.00 0.00 952273.48 93565.24 1490591.11 00:07:06.994 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:06.994 Nvme1n1p2 : 5.86 130.94 8.18 0.00 0.00 872037.49 95581.74 896935.78 00:07:06.994 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x8000 00:07:06.994 Nvme2n1 : 6.00 120.40 7.53 0.00 0.00 918138.83 93968.54 1716438.25 00:07:06.994 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x8000 length 0x8000 00:07:06.994 Nvme2n1 : 5.91 135.68 8.48 0.00 0.00 828362.97 107277.39 967916.31 00:07:06.994 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x8000 00:07:06.994 Nvme2n2 : 6.02 135.55 8.47 0.00 0.00 803280.34 21273.99 1309913.40 00:07:06.994 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x8000 length 0x8000 00:07:06.994 Nvme2n2 : 5.97 145.78 9.11 0.00 0.00 760993.69 18551.73 851766.35 00:07:06.994 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x8000 00:07:06.994 Nvme2n3 : 6.05 139.27 8.70 0.00 0.00 757011.00 12804.73 1587382.74 00:07:06.994 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x8000 length 0x8000 00:07:06.994 Nvme2n3 : 5.98 149.92 9.37 0.00 0.00 723720.89 43354.58 877577.45 00:07:06.994 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x0 length 0x2000 00:07:06.994 Nvme3n1 : 6.12 185.25 11.58 0.00 0.00 558231.67 385.97 1619646.62 00:07:06.994 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:06.994 Verification LBA range: start 0x2000 length 0x2000 00:07:06.994 Nvme3n1 : 6.02 166.16 10.39 0.00 0.00 
638855.46 2608.84 896935.78 00:07:06.994 [2024-11-26T13:19:55.564Z] =================================================================================================================== 00:07:06.994 [2024-11-26T13:19:55.564Z] Total : 1932.03 120.75 0.00 0.00 813147.23 385.97 1716438.25 00:07:08.378 00:07:08.378 real 0m8.664s 00:07:08.378 user 0m16.483s 00:07:08.378 sys 0m0.208s 00:07:08.378 13:19:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.378 13:19:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:08.378 ************************************ 00:07:08.378 END TEST bdev_verify_big_io 00:07:08.378 ************************************ 00:07:08.378 13:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.378 13:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:08.378 13:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.378 13:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:08.378 ************************************ 00:07:08.378 START TEST bdev_write_zeroes 00:07:08.378 ************************************ 00:07:08.378 13:19:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.639 [2024-11-26 13:19:56.986079] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:08.639 [2024-11-26 13:19:56.986193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62023 ] 00:07:08.639 [2024-11-26 13:19:57.142155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.900 [2024-11-26 13:19:57.222817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.470 Running I/O for 1 seconds... 
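The write_zeroes pass starting here reuses bdevperf with -w write_zeroes -t 1, which only makes sense against bdevs that advertise the write_zeroes I/O type. A quick way to confirm that for one of the GPT partitions, using the same rpc.py/jq pattern this log relies on — assuming an SPDK target is serving RPCs on the default socket, and taking the bdev name from the tables above:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
# Each bdev reports its capabilities under supported_io_types; the gpt_uuid
# section later in this log shows "write_zeroes": true in exactly this output.
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme1n1p1 \
    | jq -r '.[0].supported_io_types.write_zeroes'
# prints: true
```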
00:07:10.411 69440.00 IOPS, 271.25 MiB/s 00:07:10.411 Latency(us) 00:07:10.411 [2024-11-26T13:19:58.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.411 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme0n1 : 1.03 9858.18 38.51 0.00 0.00 12953.36 9427.10 27021.00 00:07:10.411 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme1n1p1 : 1.03 9846.06 38.46 0.00 0.00 12957.03 9275.86 27021.00 00:07:10.411 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme1n1p2 : 1.03 9834.07 38.41 0.00 0.00 12930.29 9275.86 26416.05 00:07:10.411 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme2n1 : 1.03 9823.03 38.37 0.00 0.00 12874.61 8015.56 26012.75 00:07:10.411 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme2n2 : 1.03 9811.91 38.33 0.00 0.00 12863.80 7208.96 26012.75 00:07:10.411 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme2n3 : 1.03 9800.83 38.28 0.00 0.00 12858.63 7057.72 25508.63 00:07:10.411 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.411 Nvme3n1 : 1.03 9789.88 38.24 0.00 0.00 12855.80 7007.31 26214.40 00:07:10.411 [2024-11-26T13:19:58.981Z] =================================================================================================================== 00:07:10.411 [2024-11-26T13:19:58.981Z] Total : 68763.96 268.61 0.00 0.00 12899.07 7007.31 27021.00 00:07:10.982 00:07:10.982 real 0m2.591s 00:07:10.982 user 0m2.314s 00:07:10.982 sys 0m0.165s 00:07:10.982 13:19:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.982 13:19:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:10.982 ************************************ 00:07:10.982 END TEST bdev_write_zeroes 00:07:10.982 ************************************ 00:07:11.244 13:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.244 13:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:11.244 13:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.244 13:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.244 ************************************ 00:07:11.244 START TEST bdev_json_nonenclosed 00:07:11.244 ************************************ 00:07:11.244 13:19:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.244 [2024-11-26 13:19:59.618804] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:11.244 [2024-11-26 13:19:59.618918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62076 ] 00:07:11.244 [2024-11-26 13:19:59.776579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.505 [2024-11-26 13:19:59.871191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.505 [2024-11-26 13:19:59.871268] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:11.505 [2024-11-26 13:19:59.871285] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.505 [2024-11-26 13:19:59.871294] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.505 00:07:11.505 real 0m0.489s 00:07:11.505 user 0m0.298s 00:07:11.505 sys 0m0.088s 00:07:11.505 13:20:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.505 13:20:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:11.505 ************************************ 00:07:11.505 END TEST bdev_json_nonenclosed 00:07:11.505 ************************************ 00:07:11.767 13:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.767 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:11.767 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.767 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.767 ************************************ 00:07:11.767 START TEST bdev_json_nonarray 00:07:11.767 ************************************ 00:07:11.767 13:20:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.767 [2024-11-26 13:20:00.154027] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:11.767 [2024-11-26 13:20:00.154141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62096 ] 00:07:11.767 [2024-11-26 13:20:00.313669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.027 [2024-11-26 13:20:00.409474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.027 [2024-11-26 13:20:00.409549] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
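Both JSON negative tests drive bdevperf with a deliberately malformed config and expect it to exit through spdk_app_stop with the json_config_prepare_ctx errors shown above. The fixture contents are not in this log; plausible minimal versions that would trip the two checks (file names match the trace, contents are a guess):

```bash
# "not enclosed in {}": valid JSON whose top level is an array, not an object.
printf '[]\n' > nonenclosed.json                  # contents are a guess at the fixture
# "'subsystems' should be an array": the key maps to an object instead.
printf '{ "subsystems": {} }\n' > nonarray.json   # contents are a guess at the fixture

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 '' && echo "unexpected success"
```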
00:07:12.027 [2024-11-26 13:20:00.409565] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:12.027 [2024-11-26 13:20:00.409574] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.027 00:07:12.027 real 0m0.495s 00:07:12.027 user 0m0.293s 00:07:12.027 sys 0m0.098s 00:07:12.027 13:20:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.027 13:20:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:12.027 ************************************ 00:07:12.027 END TEST bdev_json_nonarray 00:07:12.027 ************************************ 00:07:12.288 13:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:12.288 13:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:12.288 13:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:12.288 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.288 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.288 13:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.288 ************************************ 00:07:12.288 START TEST bdev_gpt_uuid 00:07:12.288 ************************************ 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62127 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62127 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62127 ']' 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.288 13:20:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.288 [2024-11-26 13:20:00.699790] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:07:12.288 [2024-11-26 13:20:00.699904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:07:12.549 [2024-11-26 13:20:00.857290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.549 [2024-11-26 13:20:00.951264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.121 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.121 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:13.121 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.121 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.121 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.383 Some configs were skipped because the RPC state that can call them passed over. 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:13.383 { 00:07:13.383 "name": "Nvme1n1p1", 00:07:13.383 "aliases": [ 00:07:13.383 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:13.383 ], 00:07:13.383 "product_name": "GPT Disk", 00:07:13.383 "block_size": 4096, 00:07:13.383 "num_blocks": 655104, 00:07:13.383 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:13.383 "assigned_rate_limits": { 00:07:13.383 "rw_ios_per_sec": 0, 00:07:13.383 "rw_mbytes_per_sec": 0, 00:07:13.383 "r_mbytes_per_sec": 0, 00:07:13.383 "w_mbytes_per_sec": 0 00:07:13.383 }, 00:07:13.383 "claimed": false, 00:07:13.383 "zoned": false, 00:07:13.383 "supported_io_types": { 00:07:13.383 "read": true, 00:07:13.383 "write": true, 00:07:13.383 "unmap": true, 00:07:13.383 "flush": true, 00:07:13.383 "reset": true, 00:07:13.383 "nvme_admin": false, 00:07:13.383 "nvme_io": false, 00:07:13.383 "nvme_io_md": false, 00:07:13.383 "write_zeroes": true, 00:07:13.383 "zcopy": false, 00:07:13.383 "get_zone_info": false, 00:07:13.383 "zone_management": false, 00:07:13.383 "zone_append": false, 00:07:13.383 "compare": true, 00:07:13.383 "compare_and_write": false, 00:07:13.383 "abort": true, 00:07:13.383 "seek_hole": false, 00:07:13.383 "seek_data": false, 00:07:13.383 "copy": true, 00:07:13.383 "nvme_iov_md": false 00:07:13.383 }, 00:07:13.383 "driver_specific": { 
00:07:13.383 "gpt": { 00:07:13.383 "base_bdev": "Nvme1n1", 00:07:13.383 "offset_blocks": 256, 00:07:13.383 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:13.383 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:13.383 "partition_name": "SPDK_TEST_first" 00:07:13.383 } 00:07:13.383 } 00:07:13.383 } 00:07:13.383 ]' 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:13.383 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.643 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:13.643 { 00:07:13.643 "name": "Nvme1n1p2", 00:07:13.643 "aliases": [ 00:07:13.643 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:13.643 ], 00:07:13.643 "product_name": "GPT Disk", 00:07:13.643 "block_size": 4096, 00:07:13.643 "num_blocks": 655103, 00:07:13.643 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:13.643 "assigned_rate_limits": { 00:07:13.643 "rw_ios_per_sec": 0, 00:07:13.643 "rw_mbytes_per_sec": 0, 00:07:13.643 "r_mbytes_per_sec": 0, 00:07:13.643 "w_mbytes_per_sec": 0 00:07:13.643 }, 00:07:13.643 "claimed": false, 00:07:13.643 "zoned": false, 00:07:13.643 "supported_io_types": { 00:07:13.643 "read": true, 00:07:13.643 "write": true, 00:07:13.643 "unmap": true, 00:07:13.643 "flush": true, 00:07:13.643 "reset": true, 00:07:13.643 "nvme_admin": false, 00:07:13.643 "nvme_io": false, 00:07:13.644 "nvme_io_md": false, 00:07:13.644 "write_zeroes": true, 00:07:13.644 "zcopy": false, 00:07:13.644 "get_zone_info": false, 00:07:13.644 "zone_management": false, 00:07:13.644 "zone_append": false, 00:07:13.644 "compare": true, 00:07:13.644 "compare_and_write": false, 00:07:13.644 "abort": true, 00:07:13.644 "seek_hole": false, 00:07:13.644 "seek_data": false, 00:07:13.644 "copy": true, 00:07:13.644 "nvme_iov_md": false 00:07:13.644 }, 00:07:13.644 "driver_specific": { 00:07:13.644 "gpt": { 00:07:13.644 "base_bdev": "Nvme1n1", 00:07:13.644 "offset_blocks": 655360, 00:07:13.644 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:13.644 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:13.644 "partition_name": "SPDK_TEST_second" 00:07:13.644 } 00:07:13.644 } 00:07:13.644 } 00:07:13.644 ]' 00:07:13.644 13:20:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62127 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62127 ']' 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62127 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62127 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.644 killing process with pid 62127 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62127' 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62127 00:07:13.644 13:20:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62127 00:07:15.030 00:07:15.031 real 0m2.933s 00:07:15.031 user 0m3.090s 00:07:15.031 sys 0m0.337s 00:07:15.031 13:20:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.031 13:20:03 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:15.031 ************************************ 00:07:15.031 END TEST bdev_gpt_uuid 00:07:15.031 ************************************ 00:07:15.031 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:15.031 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:15.031 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:15.031 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:15.031 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.293 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:15.293 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:15.293 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:15.293 13:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:15.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:15.554 Waiting for block devices as requested 00:07:15.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.554 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:15.815 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.815 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:21.105 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:21.105 13:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:21.105 13:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:21.105 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.105 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:21.105 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:21.105 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:21.105 13:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:21.105 00:07:21.105 real 0m54.937s 00:07:21.105 user 1m10.541s 00:07:21.105 sys 0m7.467s 00:07:21.105 13:20:09 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.105 ************************************ 00:07:21.105 END TEST blockdev_nvme_gpt 00:07:21.105 ************************************ 00:07:21.105 13:20:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.105 13:20:09 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:21.105 13:20:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.105 13:20:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.105 13:20:09 -- common/autotest_common.sh@10 -- # set +x 00:07:21.105 ************************************ 00:07:21.105 START TEST nvme 00:07:21.105 ************************************ 00:07:21.105 13:20:09 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:21.105 * Looking for test storage... 00:07:21.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:21.105 13:20:09 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.105 13:20:09 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.105 13:20:09 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.367 13:20:09 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.367 13:20:09 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.367 13:20:09 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.367 13:20:09 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.367 13:20:09 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.367 13:20:09 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:21.367 13:20:09 nvme -- scripts/common.sh@345 -- # : 1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.367 13:20:09 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.367 13:20:09 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@353 -- # local d=1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.367 13:20:09 nvme -- scripts/common.sh@355 -- # echo 1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.367 13:20:09 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@353 -- # local d=2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.367 13:20:09 nvme -- scripts/common.sh@355 -- # echo 2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.367 13:20:09 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.367 13:20:09 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.367 13:20:09 nvme -- scripts/common.sh@368 -- # return 0 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.367 --rc genhtml_branch_coverage=1 00:07:21.367 --rc genhtml_function_coverage=1 00:07:21.367 --rc genhtml_legend=1 00:07:21.367 --rc geninfo_all_blocks=1 00:07:21.367 --rc geninfo_unexecuted_blocks=1 00:07:21.367 00:07:21.367 ' 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.367 --rc genhtml_branch_coverage=1 00:07:21.367 --rc genhtml_function_coverage=1 00:07:21.367 --rc genhtml_legend=1 00:07:21.367 --rc geninfo_all_blocks=1 00:07:21.367 --rc geninfo_unexecuted_blocks=1 00:07:21.367 00:07:21.367 ' 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.367 --rc genhtml_branch_coverage=1 00:07:21.367 --rc genhtml_function_coverage=1 00:07:21.367 --rc genhtml_legend=1 00:07:21.367 --rc geninfo_all_blocks=1 00:07:21.367 --rc geninfo_unexecuted_blocks=1 00:07:21.367 00:07:21.367 ' 00:07:21.367 13:20:09 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.367 --rc genhtml_branch_coverage=1 00:07:21.367 --rc genhtml_function_coverage=1 00:07:21.367 --rc genhtml_legend=1 00:07:21.367 --rc geninfo_all_blocks=1 00:07:21.367 --rc geninfo_unexecuted_blocks=1 00:07:21.367 00:07:21.367 ' 00:07:21.367 13:20:09 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:21.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.199 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.199 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.199 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.199 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.199 13:20:10 nvme -- nvme/nvme.sh@79 -- # uname 00:07:22.199 13:20:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:22.199 13:20:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:22.199 13:20:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:22.199 13:20:10 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1075 -- # stubpid=62758 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:22.199 Waiting for stub to ready for secondary processes... 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62758 ]] 00:07:22.199 13:20:10 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:22.199 [2024-11-26 13:20:10.702004] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:07:22.199 [2024-11-26 13:20:10.702117] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:23.142 [2024-11-26 13:20:11.431780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.142 [2024-11-26 13:20:11.523646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.142 [2024-11-26 13:20:11.523919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.142 [2024-11-26 13:20:11.523934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.142 [2024-11-26 13:20:11.537195] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:23.142 [2024-11-26 13:20:11.537231] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.142 [2024-11-26 13:20:11.546113] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:23.142 [2024-11-26 13:20:11.546198] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:23.142 [2024-11-26 13:20:11.548397] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.142 [2024-11-26 13:20:11.548557] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:23.142 [2024-11-26 13:20:11.548611] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:23.142 [2024-11-26 13:20:11.550317] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.142 [2024-11-26 13:20:11.550464] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:23.142 [2024-11-26 13:20:11.550513] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:23.142 [2024-11-26 13:20:11.552820] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:23.142 [2024-11-26 13:20:11.552950] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:23.142 [2024-11-26 13:20:11.553000] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:23.142 [2024-11-26 13:20:11.553037] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:23.142 [2024-11-26 13:20:11.553068] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:23.143 done. 00:07:23.143 13:20:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:23.143 13:20:11 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:23.143 13:20:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:23.143 13:20:11 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:23.143 13:20:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.143 13:20:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.143 ************************************ 00:07:23.143 START TEST nvme_reset 00:07:23.143 ************************************ 00:07:23.143 13:20:11 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:23.405 Initializing NVMe Controllers 00:07:23.405 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:23.405 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:23.405 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:23.405 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:23.405 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:23.405 00:07:23.405 real 0m0.207s 00:07:23.405 user 0m0.075s 00:07:23.405 sys 0m0.087s 00:07:23.405 13:20:11 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.405 13:20:11 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 ************************************ 00:07:23.405 END TEST nvme_reset 00:07:23.405 ************************************ 00:07:23.405 13:20:11 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:23.405 13:20:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.405 13:20:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.405 13:20:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.405 ************************************ 00:07:23.405 START TEST nvme_identify 00:07:23.405 ************************************ 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:23.405 13:20:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:23.405 13:20:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:23.405 13:20:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:23.405 13:20:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:23.405 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:23.668 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:23.668 13:20:11 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:23.668 13:20:11 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:23.668 
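The nvme_identify run above follows a common SPDK pattern: enumerate the local controllers by PCI address, then attach to the stub's shared-memory group and dump identify data. Below is a minimal sketch of that flow, assuming the repo layout from this run (/home/vagrant/spdk_repo/spdk) and the stub primary already started with "-i 0" as shown earlier; the final per-controller "-r" invocation is illustrative and not a line taken from this log.

#!/usr/bin/env bash
rootdir=/home/vagrant/spdk_repo/spdk

# Enumerate NVMe controllers the way get_nvme_bdfs does: gen_nvme.sh
# emits a JSON bdev config whose traddr params are the PCI addresses.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"

# Attach as a secondary process to the stub's shared-memory group
# (-i 0, matching the stub's -i 0) and dump identify data for every
# attached controller at once, as nvme.sh does here.
"$rootdir/build/bin/spdk_nvme_identify" -i 0

# Illustrative only: a single controller can also be targeted
# through a transport ID instead of dumping all of them.
"$rootdir/build/bin/spdk_nvme_identify" -i 0 -r "trtype:PCIe traddr:${bdfs[0]}"

Running as a secondary process against the stub is what lets each short-lived test tool reuse the already-initialized controllers instead of re-probing the PCI devices on every invocation.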
===================================================== 00:07:23.668 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.668 ===================================================== 00:07:23.668 Controller Capabilities/Features 00:07:23.668 ================================ 00:07:23.668 Vendor ID: 1b36 00:07:23.668 Subsystem Vendor ID: 1af4 00:07:23.668 Serial Number: 12340 00:07:23.668 Model Number: QEMU NVMe Ctrl 00:07:23.668 Firmware Version: 8.0.0 00:07:23.668 Recommended Arb Burst: 6 00:07:23.668 IEEE OUI Identifier: 00 54 52 00:07:23.668 Multi-path I/O 00:07:23.668 May have multiple subsystem ports: No 00:07:23.668 May have multiple controllers: No 00:07:23.668 Associated with SR-IOV VF: No 00:07:23.668 Max Data Transfer Size: 524288 00:07:23.668 Max Number of Namespaces: 256 00:07:23.668 Max Number of I/O Queues: 64 00:07:23.668 NVMe Specification Version (VS): 1.4 00:07:23.668 NVMe Specification Version (Identify): 1.4 00:07:23.668 Maximum Queue Entries: 2048 00:07:23.668 Contiguous Queues Required: Yes 00:07:23.668 Arbitration Mechanisms Supported 00:07:23.668 Weighted Round Robin: Not Supported 00:07:23.668 Vendor Specific: Not Supported 00:07:23.668 Reset Timeout: 7500 ms 00:07:23.668 Doorbell Stride: 4 bytes 00:07:23.668 NVM Subsystem Reset: Not Supported 00:07:23.668 Command Sets Supported 00:07:23.668 NVM Command Set: Supported 00:07:23.668 Boot Partition: Not Supported 00:07:23.668 Memory Page Size Minimum: 4096 bytes 00:07:23.668 Memory Page Size Maximum: 65536 bytes 00:07:23.668 Persistent Memory Region: Not Supported 00:07:23.668 Optional Asynchronous Events Supported 00:07:23.668 Namespace Attribute Notices: Supported 00:07:23.668 Firmware Activation Notices: Not Supported 00:07:23.668 ANA Change Notices: Not Supported 00:07:23.668 PLE Aggregate Log Change Notices: Not Supported 00:07:23.668 LBA Status Info Alert Notices: Not Supported 00:07:23.668 EGE Aggregate Log Change Notices: Not Supported 00:07:23.668 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.668 Zone Descriptor Change Notices: Not Supported 00:07:23.668 Discovery Log Change Notices: Not Supported 00:07:23.668 Controller Attributes 00:07:23.668 128-bit Host Identifier: Not Supported 00:07:23.668 Non-Operational Permissive Mode: Not Supported 00:07:23.668 NVM Sets: Not Supported 00:07:23.668 Read Recovery Levels: Not Supported 00:07:23.668 Endurance Groups: Not Supported 00:07:23.668 Predictable Latency Mode: Not Supported 00:07:23.668 Traffic Based Keep ALive: Not Supported 00:07:23.668 Namespace Granularity: Not Supported 00:07:23.668 SQ Associations: Not Supported 00:07:23.668 UUID List: Not Supported 00:07:23.668 Multi-Domain Subsystem: Not Supported 00:07:23.668 Fixed Capacity Management: Not Supported 00:07:23.668 Variable Capacity Management: Not Supported 00:07:23.668 Delete Endurance Group: Not Supported 00:07:23.668 Delete NVM Set: Not Supported 00:07:23.668 Extended LBA Formats Supported: Supported 00:07:23.668 Flexible Data Placement Supported: Not Supported 00:07:23.668 00:07:23.668 Controller Memory Buffer Support 00:07:23.668 ================================ 00:07:23.668 Supported: No 00:07:23.668 00:07:23.668 Persistent Memory Region Support 00:07:23.668 ================================ 00:07:23.668 Supported: No 00:07:23.668 00:07:23.668 Admin Command Set Attributes 00:07:23.668 ============================ 00:07:23.668 Security Send/Receive: Not Supported 00:07:23.668 Format NVM: Supported 00:07:23.668 Firmware Activate/Download: Not Supported 00:07:23.668 Namespace Management: 
Supported 00:07:23.668 Device Self-Test: Not Supported 00:07:23.668 Directives: Supported 00:07:23.668 NVMe-MI: Not Supported 00:07:23.668 Virtualization Management: Not Supported 00:07:23.668 Doorbell Buffer Config: Supported 00:07:23.668 Get LBA Status Capability: Not Supported 00:07:23.668 Command & Feature Lockdown Capability: Not Supported 00:07:23.668 Abort Command Limit: 4 00:07:23.668 Async Event Request Limit: 4 00:07:23.668 Number of Firmware Slots: N/A 00:07:23.668 Firmware Slot 1 Read-Only: N/A 00:07:23.668 Firmware Activation Without Reset: N/A 00:07:23.668 Multiple Update Detection Support: N/A 00:07:23.668 Firmware Update Granularity: No Information Provided 00:07:23.668 Per-Namespace SMART Log: Yes 00:07:23.668 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.668 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:23.668 Command Effects Log Page: Supported 00:07:23.668 Get Log Page Extended Data: Supported 00:07:23.668 Telemetry Log Pages: Not Supported 00:07:23.668 Persistent Event Log Pages: Not Supported 00:07:23.668 Supported Log Pages Log Page: May Support 00:07:23.668 Commands Supported & Effects Log Page: Not Supported 00:07:23.668 Feature Identifiers & Effects Log Page:May Support 00:07:23.668 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.668 Data Area 4 for Telemetry Log: Not Supported 00:07:23.668 Error Log Page Entries Supported: 1 00:07:23.668 Keep Alive: Not Supported 00:07:23.668 00:07:23.668 NVM Command Set Attributes 00:07:23.669 ========================== 00:07:23.669 Submission Queue Entry Size 00:07:23.669 Max: 64 00:07:23.669 Min: 64 00:07:23.669 Completion Queue Entry Size 00:07:23.669 Max: 16 00:07:23.669 Min: 16 00:07:23.669 Number of Namespaces: 256 00:07:23.669 Compare Command: Supported 00:07:23.669 Write Uncorrectable Command: Not Supported 00:07:23.669 Dataset Management Command: Supported 00:07:23.669 Write Zeroes Command: Supported 00:07:23.669 Set Features Save Field: Supported 00:07:23.669 Reservations: Not Supported 00:07:23.669 Timestamp: Supported 00:07:23.669 Copy: Supported 00:07:23.669 Volatile Write Cache: Present 00:07:23.669 Atomic Write Unit (Normal): 1 00:07:23.669 Atomic Write Unit (PFail): 1 00:07:23.669 Atomic Compare & Write Unit: 1 00:07:23.669 Fused Compare & Write: Not Supported 00:07:23.669 Scatter-Gather List 00:07:23.669 SGL Command Set: Supported 00:07:23.669 SGL Keyed: Not Supported 00:07:23.669 SGL Bit Bucket Descriptor: Not Supported 00:07:23.669 SGL Metadata Pointer: Not Supported 00:07:23.669 Oversized SGL: Not Supported 00:07:23.669 SGL Metadata Address: Not Supported 00:07:23.669 SGL Offset: Not Supported 00:07:23.669 Transport SGL Data Block: Not Supported 00:07:23.669 Replay Protected Memory Block: Not Supported 00:07:23.669 00:07:23.669 Firmware Slot Information 00:07:23.669 ========================= 00:07:23.669 Active slot: 1 00:07:23.669 Slot 1 Firmware Revision: 1.0 00:07:23.669 00:07:23.669 00:07:23.669 Commands Supported and Effects 00:07:23.669 ============================== 00:07:23.669 Admin Commands 00:07:23.669 -------------- 00:07:23.669 Delete I/O Submission Queue (00h): Supported 00:07:23.669 Create I/O Submission Queue (01h): Supported 00:07:23.669 Get Log Page (02h): Supported 00:07:23.669 Delete I/O Completion Queue (04h): Supported 00:07:23.669 Create I/O Completion Queue (05h): Supported 00:07:23.669 Identify (06h): Supported 00:07:23.669 Abort (08h): Supported 00:07:23.669 Set Features (09h): Supported 00:07:23.669 Get Features (0Ah): Supported 00:07:23.669 Asynchronous 
Event Request (0Ch): Supported 00:07:23.669 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.669 Directive Send (19h): Supported 00:07:23.669 Directive Receive (1Ah): Supported 00:07:23.669 Virtualization Management (1Ch): Supported 00:07:23.669 Doorbell Buffer Config (7Ch): Supported 00:07:23.669 Format NVM (80h): Supported LBA-Change 00:07:23.669 I/O Commands 00:07:23.669 ------------ 00:07:23.669 Flush (00h): Supported LBA-Change 00:07:23.669 Write (01h): Supported LBA-Change 00:07:23.669 Read (02h): Supported 00:07:23.669 Compare (05h): Supported 00:07:23.669 Write Zeroes (08h): Supported LBA-Change 00:07:23.669 Dataset Management (09h): Supported LBA-Change 00:07:23.669 Unknown (0Ch): Supported 00:07:23.669 Unknown (12h): Supported 00:07:23.669 Copy (19h): Supported LBA-Change 00:07:23.669 Unknown (1Dh): Supported LBA-Change 00:07:23.669 00:07:23.669 Error Log 00:07:23.669 ========= 00:07:23.669 00:07:23.669 Arbitration 00:07:23.669 =========== 00:07:23.669 Arbitration Burst: no limit 00:07:23.669 00:07:23.669 Power Management 00:07:23.669 ================ 00:07:23.669 Number of Power States: 1 00:07:23.669 Current Power State: Power State #0 00:07:23.669 Power State #0: 00:07:23.669 Max Power: 25.00 W 00:07:23.669 Non-Operational State: Operational 00:07:23.669 Entry Latency: 16 microseconds 00:07:23.669 Exit Latency: 4 microseconds 00:07:23.669 Relative Read Throughput: 0 00:07:23.669 Relative Read Latency: 0 00:07:23.669 Relative Write Throughput: 0 00:07:23.669 Relative Write Latency: 0 00:07:23.669 [2024-11-26 13:20:12.160415] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62779 terminated unexpected 00:07:23.669 [2024-11-26 13:20:12.161335] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62779 terminated unexpected 00:07:23.669 Idle Power: Not Reported 00:07:23.669 Active Power: Not Reported 00:07:23.669 Non-Operational Permissive Mode: Not Supported 00:07:23.669 00:07:23.669 Health Information 00:07:23.669 ================== 00:07:23.669 Critical Warnings: 00:07:23.669 Available Spare Space: OK 00:07:23.669 Temperature: OK 00:07:23.669 Device Reliability: OK 00:07:23.669 Read Only: No 00:07:23.669 Volatile Memory Backup: OK 00:07:23.669 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.669 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.669 Available Spare: 0% 00:07:23.669 Available Spare Threshold: 0% 00:07:23.669 Life Percentage Used: 0% 00:07:23.669 Data Units Read: 738 00:07:23.669 Data Units Written: 666 00:07:23.669 Host Read Commands: 38426 00:07:23.669 Host Write Commands: 38212 00:07:23.669 Controller Busy Time: 0 minutes 00:07:23.669 Power Cycles: 0 00:07:23.669 Power On Hours: 0 hours 00:07:23.669 Unsafe Shutdowns: 0 00:07:23.669 Unrecoverable Media Errors: 0 00:07:23.669 Lifetime Error Log Entries: 0 00:07:23.669 Warning Temperature Time: 0 minutes 00:07:23.669 Critical Temperature Time: 0 minutes 00:07:23.669 00:07:23.669 Number of Queues 00:07:23.669 ================ 00:07:23.669 Number of I/O Submission Queues: 64 00:07:23.669 Number of I/O Completion Queues: 64 00:07:23.669 00:07:23.669 ZNS Specific Controller Data 00:07:23.669 ============================ 00:07:23.669 Zone Append Size Limit: 0 00:07:23.669 00:07:23.669 00:07:23.669 Active Namespaces 00:07:23.669 ================= 00:07:23.669 Namespace ID:1 00:07:23.669 Error Recovery Timeout: Unlimited 00:07:23.669 Command Set Identifier: NVM (00h) 00:07:23.669 Deallocate: Supported 00:07:23.669
Deallocated/Unwritten Error: Supported 00:07:23.669 Deallocated Read Value: All 0x00 00:07:23.669 Deallocate in Write Zeroes: Not Supported 00:07:23.669 Deallocated Guard Field: 0xFFFF 00:07:23.669 Flush: Supported 00:07:23.669 Reservation: Not Supported 00:07:23.669 Metadata Transferred as: Separate Metadata Buffer 00:07:23.669 Namespace Sharing Capabilities: Private 00:07:23.669 Size (in LBAs): 1548666 (5GiB) 00:07:23.669 Capacity (in LBAs): 1548666 (5GiB) 00:07:23.669 Utilization (in LBAs): 1548666 (5GiB) 00:07:23.669 Thin Provisioning: Not Supported 00:07:23.669 Per-NS Atomic Units: No 00:07:23.669 Maximum Single Source Range Length: 128 00:07:23.669 Maximum Copy Length: 128 00:07:23.669 Maximum Source Range Count: 128 00:07:23.669 NGUID/EUI64 Never Reused: No 00:07:23.669 Namespace Write Protected: No 00:07:23.669 Number of LBA Formats: 8 00:07:23.669 Current LBA Format: LBA Format #07 00:07:23.669 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.669 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.669 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.669 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.669 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.669 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.669 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.669 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.669 00:07:23.669 NVM Specific Namespace Data 00:07:23.669 =========================== 00:07:23.669 Logical Block Storage Tag Mask: 0 00:07:23.669 Protection Information Capabilities: 00:07:23.669 16b Guard Protection Information Storage Tag Support: No 00:07:23.669 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.669 Storage Tag Check Read Support: No 00:07:23.669 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.669 ===================================================== 00:07:23.669 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:23.669 ===================================================== 00:07:23.669 Controller Capabilities/Features 00:07:23.669 ================================ 00:07:23.669 Vendor ID: 1b36 00:07:23.669 Subsystem Vendor ID: 1af4 00:07:23.669 Serial Number: 12341 00:07:23.669 Model Number: QEMU NVMe Ctrl 00:07:23.669 Firmware Version: 8.0.0 00:07:23.669 Recommended Arb Burst: 6 00:07:23.670 IEEE OUI Identifier: 00 54 52 00:07:23.670 Multi-path I/O 00:07:23.670 May have multiple subsystem ports: No 00:07:23.670 May have multiple controllers: No 00:07:23.670 Associated with SR-IOV VF: No 00:07:23.670 Max Data Transfer Size: 524288 00:07:23.670 Max Number of Namespaces: 256 00:07:23.670 Max Number of I/O Queues: 64 00:07:23.670 NVMe Specification Version (VS): 1.4 00:07:23.670 NVMe 
Specification Version (Identify): 1.4 00:07:23.670 Maximum Queue Entries: 2048 00:07:23.670 Contiguous Queues Required: Yes 00:07:23.670 Arbitration Mechanisms Supported 00:07:23.670 Weighted Round Robin: Not Supported 00:07:23.670 Vendor Specific: Not Supported 00:07:23.670 Reset Timeout: 7500 ms 00:07:23.670 Doorbell Stride: 4 bytes 00:07:23.670 NVM Subsystem Reset: Not Supported 00:07:23.670 Command Sets Supported 00:07:23.670 NVM Command Set: Supported 00:07:23.670 Boot Partition: Not Supported 00:07:23.670 Memory Page Size Minimum: 4096 bytes 00:07:23.670 Memory Page Size Maximum: 65536 bytes 00:07:23.670 Persistent Memory Region: Not Supported 00:07:23.670 Optional Asynchronous Events Supported 00:07:23.670 Namespace Attribute Notices: Supported 00:07:23.670 Firmware Activation Notices: Not Supported 00:07:23.670 ANA Change Notices: Not Supported 00:07:23.670 PLE Aggregate Log Change Notices: Not Supported 00:07:23.670 LBA Status Info Alert Notices: Not Supported 00:07:23.670 EGE Aggregate Log Change Notices: Not Supported 00:07:23.670 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.670 Zone Descriptor Change Notices: Not Supported 00:07:23.670 Discovery Log Change Notices: Not Supported 00:07:23.670 Controller Attributes 00:07:23.670 128-bit Host Identifier: Not Supported 00:07:23.670 Non-Operational Permissive Mode: Not Supported 00:07:23.670 NVM Sets: Not Supported 00:07:23.670 Read Recovery Levels: Not Supported 00:07:23.670 Endurance Groups: Not Supported 00:07:23.670 Predictable Latency Mode: Not Supported 00:07:23.670 Traffic Based Keep ALive: Not Supported 00:07:23.670 Namespace Granularity: Not Supported 00:07:23.670 SQ Associations: Not Supported 00:07:23.670 UUID List: Not Supported 00:07:23.670 Multi-Domain Subsystem: Not Supported 00:07:23.670 Fixed Capacity Management: Not Supported 00:07:23.670 Variable Capacity Management: Not Supported 00:07:23.670 Delete Endurance Group: Not Supported 00:07:23.670 Delete NVM Set: Not Supported 00:07:23.670 Extended LBA Formats Supported: Supported 00:07:23.670 Flexible Data Placement Supported: Not Supported 00:07:23.670 00:07:23.670 Controller Memory Buffer Support 00:07:23.670 ================================ 00:07:23.670 Supported: No 00:07:23.670 00:07:23.670 Persistent Memory Region Support 00:07:23.670 ================================ 00:07:23.670 Supported: No 00:07:23.670 00:07:23.670 Admin Command Set Attributes 00:07:23.670 ============================ 00:07:23.670 Security Send/Receive: Not Supported 00:07:23.670 Format NVM: Supported 00:07:23.670 Firmware Activate/Download: Not Supported 00:07:23.670 Namespace Management: Supported 00:07:23.670 Device Self-Test: Not Supported 00:07:23.670 Directives: Supported 00:07:23.670 NVMe-MI: Not Supported 00:07:23.670 Virtualization Management: Not Supported 00:07:23.670 Doorbell Buffer Config: Supported 00:07:23.670 Get LBA Status Capability: Not Supported 00:07:23.670 Command & Feature Lockdown Capability: Not Supported 00:07:23.670 Abort Command Limit: 4 00:07:23.670 Async Event Request Limit: 4 00:07:23.670 Number of Firmware Slots: N/A 00:07:23.670 Firmware Slot 1 Read-Only: N/A 00:07:23.670 Firmware Activation Without Reset: N/A 00:07:23.670 Multiple Update Detection Support: N/A 00:07:23.670 Firmware Update Granularity: No Information Provided 00:07:23.670 Per-Namespace SMART Log: Yes 00:07:23.670 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.670 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:23.670 Command Effects Log Page: Supported 
00:07:23.670 Get Log Page Extended Data: Supported 00:07:23.670 Telemetry Log Pages: Not Supported 00:07:23.670 Persistent Event Log Pages: Not Supported 00:07:23.670 Supported Log Pages Log Page: May Support 00:07:23.670 Commands Supported & Effects Log Page: Not Supported 00:07:23.670 Feature Identifiers & Effects Log Page:May Support 00:07:23.670 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.670 Data Area 4 for Telemetry Log: Not Supported 00:07:23.670 Error Log Page Entries Supported: 1 00:07:23.670 Keep Alive: Not Supported 00:07:23.670 00:07:23.670 NVM Command Set Attributes 00:07:23.670 ========================== 00:07:23.670 Submission Queue Entry Size 00:07:23.670 Max: 64 00:07:23.670 Min: 64 00:07:23.670 Completion Queue Entry Size 00:07:23.670 Max: 16 00:07:23.670 Min: 16 00:07:23.670 Number of Namespaces: 256 00:07:23.670 Compare Command: Supported 00:07:23.670 Write Uncorrectable Command: Not Supported 00:07:23.670 Dataset Management Command: Supported 00:07:23.670 Write Zeroes Command: Supported 00:07:23.670 Set Features Save Field: Supported 00:07:23.670 Reservations: Not Supported 00:07:23.670 Timestamp: Supported 00:07:23.670 Copy: Supported 00:07:23.670 Volatile Write Cache: Present 00:07:23.670 Atomic Write Unit (Normal): 1 00:07:23.670 Atomic Write Unit (PFail): 1 00:07:23.670 Atomic Compare & Write Unit: 1 00:07:23.670 Fused Compare & Write: Not Supported 00:07:23.670 Scatter-Gather List 00:07:23.670 SGL Command Set: Supported 00:07:23.670 SGL Keyed: Not Supported 00:07:23.670 SGL Bit Bucket Descriptor: Not Supported 00:07:23.670 SGL Metadata Pointer: Not Supported 00:07:23.670 Oversized SGL: Not Supported 00:07:23.670 SGL Metadata Address: Not Supported 00:07:23.670 SGL Offset: Not Supported 00:07:23.670 Transport SGL Data Block: Not Supported 00:07:23.670 Replay Protected Memory Block: Not Supported 00:07:23.670 00:07:23.670 Firmware Slot Information 00:07:23.670 ========================= 00:07:23.670 Active slot: 1 00:07:23.670 Slot 1 Firmware Revision: 1.0 00:07:23.670 00:07:23.670 00:07:23.670 Commands Supported and Effects 00:07:23.670 ============================== 00:07:23.670 Admin Commands 00:07:23.670 -------------- 00:07:23.670 Delete I/O Submission Queue (00h): Supported 00:07:23.670 Create I/O Submission Queue (01h): Supported 00:07:23.670 Get Log Page (02h): Supported 00:07:23.670 Delete I/O Completion Queue (04h): Supported 00:07:23.670 Create I/O Completion Queue (05h): Supported 00:07:23.670 Identify (06h): Supported 00:07:23.670 Abort (08h): Supported 00:07:23.670 Set Features (09h): Supported 00:07:23.670 Get Features (0Ah): Supported 00:07:23.670 Asynchronous Event Request (0Ch): Supported 00:07:23.670 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.670 Directive Send (19h): Supported 00:07:23.670 Directive Receive (1Ah): Supported 00:07:23.670 Virtualization Management (1Ch): Supported 00:07:23.670 Doorbell Buffer Config (7Ch): Supported 00:07:23.670 Format NVM (80h): Supported LBA-Change 00:07:23.670 I/O Commands 00:07:23.670 ------------ 00:07:23.670 Flush (00h): Supported LBA-Change 00:07:23.670 Write (01h): Supported LBA-Change 00:07:23.670 Read (02h): Supported 00:07:23.670 Compare (05h): Supported 00:07:23.670 Write Zeroes (08h): Supported LBA-Change 00:07:23.670 Dataset Management (09h): Supported LBA-Change 00:07:23.670 Unknown (0Ch): Supported 00:07:23.670 Unknown (12h): Supported 00:07:23.670 Copy (19h): Supported LBA-Change 00:07:23.670 Unknown (1Dh): Supported LBA-Change 00:07:23.670 00:07:23.670 Error 
Log 00:07:23.670 ========= 00:07:23.670 00:07:23.670 Arbitration 00:07:23.670 =========== 00:07:23.670 Arbitration Burst: no limit 00:07:23.670 00:07:23.670 Power Management 00:07:23.670 ================ 00:07:23.670 Number of Power States: 1 00:07:23.670 Current Power State: Power State #0 00:07:23.670 Power State #0: 00:07:23.670 Max Power: 25.00 W 00:07:23.670 Non-Operational State: Operational 00:07:23.670 Entry Latency: 16 microseconds 00:07:23.670 Exit Latency: 4 microseconds 00:07:23.670 Relative Read Throughput: 0 00:07:23.670 Relative Read Latency: 0 00:07:23.670 Relative Write Throughput: 0 00:07:23.670 Relative Write Latency: 0 00:07:23.670 Idle Power: Not Reported 00:07:23.670 Active Power: Not Reported 00:07:23.670 Non-Operational Permissive Mode: Not Supported 00:07:23.670 00:07:23.670 Health Information 00:07:23.670 ================== 00:07:23.670 Critical Warnings: 00:07:23.670 Available Spare Space: OK 00:07:23.670 [2024-11-26 13:20:12.162859] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62779 terminated unexpected 00:07:23.670 Temperature: OK 00:07:23.670 Device Reliability: OK 00:07:23.670 Read Only: No 00:07:23.670 Volatile Memory Backup: OK 00:07:23.670 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.670 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.670 Available Spare: 0% 00:07:23.671 Available Spare Threshold: 0% 00:07:23.671 Life Percentage Used: 0% 00:07:23.671 Data Units Read: 1128 00:07:23.671 Data Units Written: 995 00:07:23.671 Host Read Commands: 56861 00:07:23.671 Host Write Commands: 55665 00:07:23.671 Controller Busy Time: 0 minutes 00:07:23.671 Power Cycles: 0 00:07:23.671 Power On Hours: 0 hours 00:07:23.671 Unsafe Shutdowns: 0 00:07:23.671 Unrecoverable Media Errors: 0 00:07:23.671 Lifetime Error Log Entries: 0 00:07:23.671 Warning Temperature Time: 0 minutes 00:07:23.671 Critical Temperature Time: 0 minutes 00:07:23.671 00:07:23.671 Number of Queues 00:07:23.671 ================ 00:07:23.671 Number of I/O Submission Queues: 64 00:07:23.671 Number of I/O Completion Queues: 64 00:07:23.671 00:07:23.671 ZNS Specific Controller Data 00:07:23.671 ============================ 00:07:23.671 Zone Append Size Limit: 0 00:07:23.671 00:07:23.671 00:07:23.671 Active Namespaces 00:07:23.671 ================= 00:07:23.671 Namespace ID:1 00:07:23.671 Error Recovery Timeout: Unlimited 00:07:23.671 Command Set Identifier: NVM (00h) 00:07:23.671 Deallocate: Supported 00:07:23.671 Deallocated/Unwritten Error: Supported 00:07:23.671 Deallocated Read Value: All 0x00 00:07:23.671 Deallocate in Write Zeroes: Not Supported 00:07:23.671 Deallocated Guard Field: 0xFFFF 00:07:23.671 Flush: Supported 00:07:23.671 Reservation: Not Supported 00:07:23.671 Namespace Sharing Capabilities: Private 00:07:23.671 Size (in LBAs): 1310720 (5GiB) 00:07:23.671 Capacity (in LBAs): 1310720 (5GiB) 00:07:23.671 Utilization (in LBAs): 1310720 (5GiB) 00:07:23.671 Thin Provisioning: Not Supported 00:07:23.671 Per-NS Atomic Units: No 00:07:23.671 Maximum Single Source Range Length: 128 00:07:23.671 Maximum Copy Length: 128 00:07:23.671 Maximum Source Range Count: 128 00:07:23.671 NGUID/EUI64 Never Reused: No 00:07:23.671 Namespace Write Protected: No 00:07:23.671 Number of LBA Formats: 8 00:07:23.671 Current LBA Format: LBA Format #04 00:07:23.671 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.671 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.671 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.671 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:23.671 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.671 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.671 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.671 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.671 00:07:23.671 NVM Specific Namespace Data 00:07:23.671 =========================== 00:07:23.671 Logical Block Storage Tag Mask: 0 00:07:23.671 Protection Information Capabilities: 00:07:23.671 16b Guard Protection Information Storage Tag Support: No 00:07:23.671 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.671 Storage Tag Check Read Support: No 00:07:23.671 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.671 ===================================================== 00:07:23.671 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:23.671 ===================================================== 00:07:23.671 Controller Capabilities/Features 00:07:23.671 ================================ 00:07:23.671 Vendor ID: 1b36 00:07:23.671 Subsystem Vendor ID: 1af4 00:07:23.671 Serial Number: 12343 00:07:23.671 Model Number: QEMU NVMe Ctrl 00:07:23.671 Firmware Version: 8.0.0 00:07:23.671 Recommended Arb Burst: 6 00:07:23.671 IEEE OUI Identifier: 00 54 52 00:07:23.671 Multi-path I/O 00:07:23.671 May have multiple subsystem ports: No 00:07:23.671 May have multiple controllers: Yes 00:07:23.671 Associated with SR-IOV VF: No 00:07:23.671 Max Data Transfer Size: 524288 00:07:23.671 Max Number of Namespaces: 256 00:07:23.671 Max Number of I/O Queues: 64 00:07:23.671 NVMe Specification Version (VS): 1.4 00:07:23.671 NVMe Specification Version (Identify): 1.4 00:07:23.671 Maximum Queue Entries: 2048 00:07:23.671 Contiguous Queues Required: Yes 00:07:23.671 Arbitration Mechanisms Supported 00:07:23.671 Weighted Round Robin: Not Supported 00:07:23.671 Vendor Specific: Not Supported 00:07:23.671 Reset Timeout: 7500 ms 00:07:23.671 Doorbell Stride: 4 bytes 00:07:23.671 NVM Subsystem Reset: Not Supported 00:07:23.671 Command Sets Supported 00:07:23.671 NVM Command Set: Supported 00:07:23.671 Boot Partition: Not Supported 00:07:23.671 Memory Page Size Minimum: 4096 bytes 00:07:23.671 Memory Page Size Maximum: 65536 bytes 00:07:23.671 Persistent Memory Region: Not Supported 00:07:23.671 Optional Asynchronous Events Supported 00:07:23.671 Namespace Attribute Notices: Supported 00:07:23.671 Firmware Activation Notices: Not Supported 00:07:23.671 ANA Change Notices: Not Supported 00:07:23.671 PLE Aggregate Log Change Notices: Not Supported 00:07:23.671 LBA Status Info Alert Notices: Not Supported 00:07:23.671 EGE Aggregate Log Change Notices: Not Supported 00:07:23.671 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.671 Zone 
Descriptor Change Notices: Not Supported 00:07:23.671 Discovery Log Change Notices: Not Supported 00:07:23.671 Controller Attributes 00:07:23.671 128-bit Host Identifier: Not Supported 00:07:23.671 Non-Operational Permissive Mode: Not Supported 00:07:23.671 NVM Sets: Not Supported 00:07:23.671 Read Recovery Levels: Not Supported 00:07:23.671 Endurance Groups: Supported 00:07:23.671 Predictable Latency Mode: Not Supported 00:07:23.671 Traffic Based Keep ALive: Not Supported 00:07:23.671 Namespace Granularity: Not Supported 00:07:23.671 SQ Associations: Not Supported 00:07:23.671 UUID List: Not Supported 00:07:23.671 Multi-Domain Subsystem: Not Supported 00:07:23.671 Fixed Capacity Management: Not Supported 00:07:23.671 Variable Capacity Management: Not Supported 00:07:23.671 Delete Endurance Group: Not Supported 00:07:23.671 Delete NVM Set: Not Supported 00:07:23.671 Extended LBA Formats Supported: Supported 00:07:23.671 Flexible Data Placement Supported: Supported 00:07:23.671 00:07:23.671 Controller Memory Buffer Support 00:07:23.671 ================================ 00:07:23.671 Supported: No 00:07:23.671 00:07:23.671 Persistent Memory Region Support 00:07:23.671 ================================ 00:07:23.671 Supported: No 00:07:23.671 00:07:23.671 Admin Command Set Attributes 00:07:23.671 ============================ 00:07:23.671 Security Send/Receive: Not Supported 00:07:23.671 Format NVM: Supported 00:07:23.671 Firmware Activate/Download: Not Supported 00:07:23.671 Namespace Management: Supported 00:07:23.671 Device Self-Test: Not Supported 00:07:23.671 Directives: Supported 00:07:23.671 NVMe-MI: Not Supported 00:07:23.671 Virtualization Management: Not Supported 00:07:23.671 Doorbell Buffer Config: Supported 00:07:23.671 Get LBA Status Capability: Not Supported 00:07:23.671 Command & Feature Lockdown Capability: Not Supported 00:07:23.671 Abort Command Limit: 4 00:07:23.671 Async Event Request Limit: 4 00:07:23.671 Number of Firmware Slots: N/A 00:07:23.671 Firmware Slot 1 Read-Only: N/A 00:07:23.671 Firmware Activation Without Reset: N/A 00:07:23.671 Multiple Update Detection Support: N/A 00:07:23.671 Firmware Update Granularity: No Information Provided 00:07:23.671 Per-Namespace SMART Log: Yes 00:07:23.671 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.671 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:23.671 Command Effects Log Page: Supported 00:07:23.672 Get Log Page Extended Data: Supported 00:07:23.672 Telemetry Log Pages: Not Supported 00:07:23.672 Persistent Event Log Pages: Not Supported 00:07:23.672 Supported Log Pages Log Page: May Support 00:07:23.672 Commands Supported & Effects Log Page: Not Supported 00:07:23.672 Feature Identifiers & Effects Log Page:May Support 00:07:23.672 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.672 Data Area 4 for Telemetry Log: Not Supported 00:07:23.672 Error Log Page Entries Supported: 1 00:07:23.672 Keep Alive: Not Supported 00:07:23.672 00:07:23.672 NVM Command Set Attributes 00:07:23.672 ========================== 00:07:23.672 Submission Queue Entry Size 00:07:23.672 Max: 64 00:07:23.672 Min: 64 00:07:23.672 Completion Queue Entry Size 00:07:23.672 Max: 16 00:07:23.672 Min: 16 00:07:23.672 Number of Namespaces: 256 00:07:23.672 Compare Command: Supported 00:07:23.672 Write Uncorrectable Command: Not Supported 00:07:23.672 Dataset Management Command: Supported 00:07:23.672 Write Zeroes Command: Supported 00:07:23.672 Set Features Save Field: Supported 00:07:23.672 Reservations: Not Supported 00:07:23.672 
Timestamp: Supported 00:07:23.672 Copy: Supported 00:07:23.672 Volatile Write Cache: Present 00:07:23.672 Atomic Write Unit (Normal): 1 00:07:23.672 Atomic Write Unit (PFail): 1 00:07:23.672 Atomic Compare & Write Unit: 1 00:07:23.672 Fused Compare & Write: Not Supported 00:07:23.672 Scatter-Gather List 00:07:23.672 SGL Command Set: Supported 00:07:23.672 SGL Keyed: Not Supported 00:07:23.672 SGL Bit Bucket Descriptor: Not Supported 00:07:23.672 SGL Metadata Pointer: Not Supported 00:07:23.672 Oversized SGL: Not Supported 00:07:23.672 SGL Metadata Address: Not Supported 00:07:23.672 SGL Offset: Not Supported 00:07:23.672 Transport SGL Data Block: Not Supported 00:07:23.672 Replay Protected Memory Block: Not Supported 00:07:23.672 00:07:23.672 Firmware Slot Information 00:07:23.672 ========================= 00:07:23.672 Active slot: 1 00:07:23.672 Slot 1 Firmware Revision: 1.0 00:07:23.672 00:07:23.672 00:07:23.672 Commands Supported and Effects 00:07:23.672 ============================== 00:07:23.672 Admin Commands 00:07:23.672 -------------- 00:07:23.672 Delete I/O Submission Queue (00h): Supported 00:07:23.672 Create I/O Submission Queue (01h): Supported 00:07:23.672 Get Log Page (02h): Supported 00:07:23.672 Delete I/O Completion Queue (04h): Supported 00:07:23.672 Create I/O Completion Queue (05h): Supported 00:07:23.672 Identify (06h): Supported 00:07:23.672 Abort (08h): Supported 00:07:23.672 Set Features (09h): Supported 00:07:23.672 Get Features (0Ah): Supported 00:07:23.672 Asynchronous Event Request (0Ch): Supported 00:07:23.672 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.672 Directive Send (19h): Supported 00:07:23.672 Directive Receive (1Ah): Supported 00:07:23.672 Virtualization Management (1Ch): Supported 00:07:23.672 Doorbell Buffer Config (7Ch): Supported 00:07:23.672 Format NVM (80h): Supported LBA-Change 00:07:23.672 I/O Commands 00:07:23.672 ------------ 00:07:23.672 Flush (00h): Supported LBA-Change 00:07:23.672 Write (01h): Supported LBA-Change 00:07:23.672 Read (02h): Supported 00:07:23.672 Compare (05h): Supported 00:07:23.672 Write Zeroes (08h): Supported LBA-Change 00:07:23.672 Dataset Management (09h): Supported LBA-Change 00:07:23.672 Unknown (0Ch): Supported 00:07:23.672 Unknown (12h): Supported 00:07:23.672 Copy (19h): Supported LBA-Change 00:07:23.672 Unknown (1Dh): Supported LBA-Change 00:07:23.672 00:07:23.672 Error Log 00:07:23.672 ========= 00:07:23.672 00:07:23.672 Arbitration 00:07:23.672 =========== 00:07:23.672 Arbitration Burst: no limit 00:07:23.672 00:07:23.672 Power Management 00:07:23.672 ================ 00:07:23.672 Number of Power States: 1 00:07:23.672 Current Power State: Power State #0 00:07:23.672 Power State #0: 00:07:23.672 Max Power: 25.00 W 00:07:23.672 Non-Operational State: Operational 00:07:23.672 Entry Latency: 16 microseconds 00:07:23.672 Exit Latency: 4 microseconds 00:07:23.672 Relative Read Throughput: 0 00:07:23.672 Relative Read Latency: 0 00:07:23.672 Relative Write Throughput: 0 00:07:23.672 Relative Write Latency: 0 00:07:23.672 Idle Power: Not Reported 00:07:23.672 Active Power: Not Reported 00:07:23.672 Non-Operational Permissive Mode: Not Supported 00:07:23.672 00:07:23.672 Health Information 00:07:23.672 ================== 00:07:23.672 Critical Warnings: 00:07:23.672 Available Spare Space: OK 00:07:23.672 Temperature: OK 00:07:23.672 Device Reliability: OK 00:07:23.672 Read Only: No 00:07:23.672 Volatile Memory Backup: OK 00:07:23.672 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.672 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.672 Available Spare: 0% 00:07:23.672 Available Spare Threshold: 0% 00:07:23.672 Life Percentage Used: 0% 00:07:23.672 Data Units Read: 905 00:07:23.672 Data Units Written: 834 00:07:23.672 Host Read Commands: 39970 00:07:23.672 Host Write Commands: 39393 00:07:23.672 Controller Busy Time: 0 minutes 00:07:23.672 Power Cycles: 0 00:07:23.672 Power On Hours: 0 hours 00:07:23.672 Unsafe Shutdowns: 0 00:07:23.672 Unrecoverable Media Errors: 0 00:07:23.672 Lifetime Error Log Entries: 0 00:07:23.672 Warning Temperature Time: 0 minutes 00:07:23.672 Critical Temperature Time: 0 minutes 00:07:23.672 00:07:23.672 Number of Queues 00:07:23.672 ================ 00:07:23.672 Number of I/O Submission Queues: 64 00:07:23.672 Number of I/O Completion Queues: 64 00:07:23.672 00:07:23.672 ZNS Specific Controller Data 00:07:23.672 ============================ 00:07:23.672 Zone Append Size Limit: 0 00:07:23.672 00:07:23.672 00:07:23.672 Active Namespaces 00:07:23.672 ================= 00:07:23.672 Namespace ID:1 00:07:23.672 Error Recovery Timeout: Unlimited 00:07:23.672 Command Set Identifier: NVM (00h) 00:07:23.672 Deallocate: Supported 00:07:23.672 Deallocated/Unwritten Error: Supported 00:07:23.672 Deallocated Read Value: All 0x00 00:07:23.672 Deallocate in Write Zeroes: Not Supported 00:07:23.672 Deallocated Guard Field: 0xFFFF 00:07:23.672 Flush: Supported 00:07:23.672 Reservation: Not Supported 00:07:23.672 Namespace Sharing Capabilities: Multiple Controllers 00:07:23.672 Size (in LBAs): 262144 (1GiB) 00:07:23.672 Capacity (in LBAs): 262144 (1GiB) 00:07:23.672 Utilization (in LBAs): 262144 (1GiB) 00:07:23.672 Thin Provisioning: Not Supported 00:07:23.672 Per-NS Atomic Units: No 00:07:23.672 Maximum Single Source Range Length: 128 00:07:23.672 Maximum Copy Length: 128 00:07:23.672 Maximum Source Range Count: 128 00:07:23.672 NGUID/EUI64 Never Reused: No 00:07:23.672 Namespace Write Protected: No 00:07:23.672 Endurance group ID: 1 00:07:23.672 Number of LBA Formats: 8 00:07:23.672 Current LBA Format: LBA Format #04 00:07:23.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.672 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.672 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.672 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.672 00:07:23.672 Get Feature FDP: 00:07:23.672 ================ 00:07:23.672 Enabled: Yes 00:07:23.672 FDP configuration index: 0 00:07:23.672 00:07:23.672 FDP configurations log page 00:07:23.672 =========================== 00:07:23.672 Number of FDP configurations: 1 00:07:23.672 Version: 0 00:07:23.672 Size: 112 00:07:23.672 FDP Configuration Descriptor: 0 00:07:23.672 Descriptor Size: 96 00:07:23.672 Reclaim Group Identifier format: 2 00:07:23.672 FDP Volatile Write Cache: Not Present 00:07:23.672 FDP Configuration: Valid 00:07:23.672 Vendor Specific Size: 0 00:07:23.672 Number of Reclaim Groups: 2 00:07:23.672 Number of Reclaim Unit Handles: 8 00:07:23.672 Max Placement Identifiers: 128 00:07:23.672 Number of Namespaces Supported: 256 00:07:23.672 Reclaim unit Nominal Size: 6000000 bytes 00:07:23.672 Estimated Reclaim Unit Time Limit: Not Reported 00:07:23.672 RUH Desc #000: RUH Type: Initially Isolated 00:07:23.672 RUH Desc #001: RUH
Type: Initially Isolated 00:07:23.672 RUH Desc #002: RUH Type: Initially Isolated 00:07:23.673 RUH Desc #003: RUH Type: Initially Isolated 00:07:23.673 RUH Desc #004: RUH Type: Initially Isolated 00:07:23.673 RUH Desc #005: RUH Type: Initially Isolated 00:07:23.673 RUH Desc #006: RUH Type: Initially Isolated 00:07:23.673 RUH Desc #007: RUH Type: Initially Isolated 00:07:23.673 00:07:23.673 FDP reclaim unit handle usage log page 00:07:23.673 ====================================== 00:07:23.673 Number of Reclaim Unit Handles: 8 00:07:23.673 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:23.673 RUH Usage Desc #001: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #002: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #003: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #004: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #005: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #006: RUH Attributes: Unused 00:07:23.673 RUH Usage Desc #007: RUH Attributes: Unused 00:07:23.673 00:07:23.673 FDP statistics log page 00:07:23.673 ======================= 00:07:23.673 Host bytes with metadata written: 539140096 00:07:23.673 [2024-11-26 13:20:12.164433] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62779 terminated unexpected 00:07:23.673 Media bytes with metadata written: 539197440 00:07:23.673 Media bytes erased: 0 00:07:23.673 00:07:23.673 FDP events log page 00:07:23.673 =================== 00:07:23.673 Number of FDP events: 0 00:07:23.673 00:07:23.673 NVM Specific Namespace Data 00:07:23.673 =========================== 00:07:23.673 Logical Block Storage Tag Mask: 0 00:07:23.673 Protection Information Capabilities: 00:07:23.673 16b Guard Protection Information Storage Tag Support: No 00:07:23.673 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.673 Storage Tag Check Read Support: No 00:07:23.673 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.673 ===================================================== 00:07:23.673 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:23.673 ===================================================== 00:07:23.673 Controller Capabilities/Features 00:07:23.673 ================================ 00:07:23.673 Vendor ID: 1b36 00:07:23.673 Subsystem Vendor ID: 1af4 00:07:23.673 Serial Number: 12342 00:07:23.673 Model Number: QEMU NVMe Ctrl 00:07:23.673 Firmware Version: 8.0.0 00:07:23.673 Recommended Arb Burst: 6 00:07:23.673 IEEE OUI Identifier: 00 54 52 00:07:23.673 Multi-path I/O 00:07:23.673 May have multiple subsystem ports: No 00:07:23.673 May have multiple controllers: No 00:07:23.673 Associated with SR-IOV VF: No 00:07:23.673 Max Data Transfer Size: 524288 00:07:23.673 Max Number of Namespaces: 256
00:07:23.673 Max Number of I/O Queues: 64 00:07:23.673 NVMe Specification Version (VS): 1.4 00:07:23.673 NVMe Specification Version (Identify): 1.4 00:07:23.673 Maximum Queue Entries: 2048 00:07:23.673 Contiguous Queues Required: Yes 00:07:23.673 Arbitration Mechanisms Supported 00:07:23.673 Weighted Round Robin: Not Supported 00:07:23.673 Vendor Specific: Not Supported 00:07:23.673 Reset Timeout: 7500 ms 00:07:23.673 Doorbell Stride: 4 bytes 00:07:23.673 NVM Subsystem Reset: Not Supported 00:07:23.673 Command Sets Supported 00:07:23.673 NVM Command Set: Supported 00:07:23.673 Boot Partition: Not Supported 00:07:23.673 Memory Page Size Minimum: 4096 bytes 00:07:23.673 Memory Page Size Maximum: 65536 bytes 00:07:23.673 Persistent Memory Region: Not Supported 00:07:23.673 Optional Asynchronous Events Supported 00:07:23.673 Namespace Attribute Notices: Supported 00:07:23.673 Firmware Activation Notices: Not Supported 00:07:23.673 ANA Change Notices: Not Supported 00:07:23.673 PLE Aggregate Log Change Notices: Not Supported 00:07:23.673 LBA Status Info Alert Notices: Not Supported 00:07:23.673 EGE Aggregate Log Change Notices: Not Supported 00:07:23.673 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.673 Zone Descriptor Change Notices: Not Supported 00:07:23.673 Discovery Log Change Notices: Not Supported 00:07:23.673 Controller Attributes 00:07:23.673 128-bit Host Identifier: Not Supported 00:07:23.673 Non-Operational Permissive Mode: Not Supported 00:07:23.673 NVM Sets: Not Supported 00:07:23.673 Read Recovery Levels: Not Supported 00:07:23.673 Endurance Groups: Not Supported 00:07:23.673 Predictable Latency Mode: Not Supported 00:07:23.673 Traffic Based Keep Alive: Not Supported 00:07:23.673 Namespace Granularity: Not Supported 00:07:23.673 SQ Associations: Not Supported 00:07:23.673 UUID List: Not Supported 00:07:23.673 Multi-Domain Subsystem: Not Supported 00:07:23.673 Fixed Capacity Management: Not Supported 00:07:23.673 Variable Capacity Management: Not Supported 00:07:23.673 Delete Endurance Group: Not Supported 00:07:23.673 Delete NVM Set: Not Supported 00:07:23.673 Extended LBA Formats Supported: Supported 00:07:23.673 Flexible Data Placement Supported: Not Supported 00:07:23.673 00:07:23.673 Controller Memory Buffer Support 00:07:23.673 ================================ 00:07:23.673 Supported: No 00:07:23.673 00:07:23.673 Persistent Memory Region Support 00:07:23.673 ================================ 00:07:23.673 Supported: No 00:07:23.673 00:07:23.673 Admin Command Set Attributes 00:07:23.673 ============================ 00:07:23.673 Security Send/Receive: Not Supported 00:07:23.673 Format NVM: Supported 00:07:23.673 Firmware Activate/Download: Not Supported 00:07:23.673 Namespace Management: Supported 00:07:23.673 Device Self-Test: Not Supported 00:07:23.673 Directives: Supported 00:07:23.673 NVMe-MI: Not Supported 00:07:23.673 Virtualization Management: Not Supported 00:07:23.673 Doorbell Buffer Config: Supported 00:07:23.673 Get LBA Status Capability: Not Supported 00:07:23.673 Command & Feature Lockdown Capability: Not Supported 00:07:23.673 Abort Command Limit: 4 00:07:23.673 Async Event Request Limit: 4 00:07:23.673 Number of Firmware Slots: N/A 00:07:23.673 Firmware Slot 1 Read-Only: N/A 00:07:23.673 Firmware Activation Without Reset: N/A 00:07:23.673 Multiple Update Detection Support: N/A 00:07:23.673 Firmware Update Granularity: No Information Provided 00:07:23.673 Per-Namespace SMART Log: Yes 00:07:23.673 Asymmetric Namespace Access Log Page: Not Supported
00:07:23.673 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:23.673 Command Effects Log Page: Supported 00:07:23.673 Get Log Page Extended Data: Supported 00:07:23.673 Telemetry Log Pages: Not Supported 00:07:23.673 Persistent Event Log Pages: Not Supported 00:07:23.673 Supported Log Pages Log Page: May Support 00:07:23.673 Commands Supported & Effects Log Page: Not Supported 00:07:23.673 Feature Identifiers & Effects Log Page: May Support 00:07:23.673 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.673 Data Area 4 for Telemetry Log: Not Supported 00:07:23.673 Error Log Page Entries Supported: 1 00:07:23.673 Keep Alive: Not Supported 00:07:23.673 00:07:23.673 NVM Command Set Attributes 00:07:23.673 ========================== 00:07:23.673 Submission Queue Entry Size 00:07:23.673 Max: 64 00:07:23.673 Min: 64 00:07:23.673 Completion Queue Entry Size 00:07:23.673 Max: 16 00:07:23.673 Min: 16 00:07:23.673 Number of Namespaces: 256 00:07:23.673 Compare Command: Supported 00:07:23.673 Write Uncorrectable Command: Not Supported 00:07:23.673 Dataset Management Command: Supported 00:07:23.674 Write Zeroes Command: Supported 00:07:23.674 Set Features Save Field: Supported 00:07:23.674 Reservations: Not Supported 00:07:23.674 Timestamp: Supported 00:07:23.674 Copy: Supported 00:07:23.674 Volatile Write Cache: Present 00:07:23.674 Atomic Write Unit (Normal): 1 00:07:23.674 Atomic Write Unit (PFail): 1 00:07:23.674 Atomic Compare & Write Unit: 1 00:07:23.674 Fused Compare & Write: Not Supported 00:07:23.674 Scatter-Gather List 00:07:23.674 SGL Command Set: Supported 00:07:23.674 SGL Keyed: Not Supported 00:07:23.674 SGL Bit Bucket Descriptor: Not Supported 00:07:23.674 SGL Metadata Pointer: Not Supported 00:07:23.674 Oversized SGL: Not Supported 00:07:23.674 SGL Metadata Address: Not Supported 00:07:23.674 SGL Offset: Not Supported 00:07:23.674 Transport SGL Data Block: Not Supported 00:07:23.674 Replay Protected Memory Block: Not Supported 00:07:23.674 00:07:23.674 Firmware Slot Information 00:07:23.674 ========================= 00:07:23.674 Active slot: 1 00:07:23.674 Slot 1 Firmware Revision: 1.0 00:07:23.674 00:07:23.674 00:07:23.674 Commands Supported and Effects 00:07:23.674 ============================== 00:07:23.674 Admin Commands 00:07:23.674 -------------- 00:07:23.674 Delete I/O Submission Queue (00h): Supported 00:07:23.674 Create I/O Submission Queue (01h): Supported 00:07:23.674 Get Log Page (02h): Supported 00:07:23.674 Delete I/O Completion Queue (04h): Supported 00:07:23.674 Create I/O Completion Queue (05h): Supported 00:07:23.674 Identify (06h): Supported 00:07:23.674 Abort (08h): Supported 00:07:23.674 Set Features (09h): Supported 00:07:23.674 Get Features (0Ah): Supported 00:07:23.674 Asynchronous Event Request (0Ch): Supported 00:07:23.674 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.674 Directive Send (19h): Supported 00:07:23.674 Directive Receive (1Ah): Supported 00:07:23.674 Virtualization Management (1Ch): Supported 00:07:23.674 Doorbell Buffer Config (7Ch): Supported 00:07:23.674 Format NVM (80h): Supported LBA-Change 00:07:23.674 I/O Commands 00:07:23.674 ------------ 00:07:23.674 Flush (00h): Supported LBA-Change 00:07:23.674 Write (01h): Supported LBA-Change 00:07:23.674 Read (02h): Supported 00:07:23.674 Compare (05h): Supported 00:07:23.674 Write Zeroes (08h): Supported LBA-Change 00:07:23.674 Dataset Management (09h): Supported LBA-Change 00:07:23.674 Unknown (0Ch): Supported 00:07:23.674 Unknown (12h): Supported 00:07:23.674 Copy (19h):
Supported LBA-Change 00:07:23.674 Unknown (1Dh): Supported LBA-Change 00:07:23.674 00:07:23.674 Error Log 00:07:23.674 ========= 00:07:23.674 00:07:23.674 Arbitration 00:07:23.674 =========== 00:07:23.674 Arbitration Burst: no limit 00:07:23.674 00:07:23.674 Power Management 00:07:23.674 ================ 00:07:23.674 Number of Power States: 1 00:07:23.674 Current Power State: Power State #0 00:07:23.674 Power State #0: 00:07:23.674 Max Power: 25.00 W 00:07:23.674 Non-Operational State: Operational 00:07:23.674 Entry Latency: 16 microseconds 00:07:23.674 Exit Latency: 4 microseconds 00:07:23.674 Relative Read Throughput: 0 00:07:23.674 Relative Read Latency: 0 00:07:23.674 Relative Write Throughput: 0 00:07:23.674 Relative Write Latency: 0 00:07:23.674 Idle Power: Not Reported 00:07:23.674 Active Power: Not Reported 00:07:23.674 Non-Operational Permissive Mode: Not Supported 00:07:23.674 00:07:23.674 Health Information 00:07:23.674 ================== 00:07:23.674 Critical Warnings: 00:07:23.674 Available Spare Space: OK 00:07:23.674 Temperature: OK 00:07:23.674 Device Reliability: OK 00:07:23.674 Read Only: No 00:07:23.674 Volatile Memory Backup: OK 00:07:23.674 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.674 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.674 Available Spare: 0% 00:07:23.674 Available Spare Threshold: 0% 00:07:23.674 Life Percentage Used: 0% 00:07:23.674 Data Units Read: 2391 00:07:23.674 Data Units Written: 2178 00:07:23.674 Host Read Commands: 117078 00:07:23.674 Host Write Commands: 115349 00:07:23.674 Controller Busy Time: 0 minutes 00:07:23.674 Power Cycles: 0 00:07:23.674 Power On Hours: 0 hours 00:07:23.674 Unsafe Shutdowns: 0 00:07:23.674 Unrecoverable Media Errors: 0 00:07:23.674 Lifetime Error Log Entries: 0 00:07:23.674 Warning Temperature Time: 0 minutes 00:07:23.674 Critical Temperature Time: 0 minutes 00:07:23.674 00:07:23.674 Number of Queues 00:07:23.674 ================ 00:07:23.674 Number of I/O Submission Queues: 64 00:07:23.674 Number of I/O Completion Queues: 64 00:07:23.674 00:07:23.674 ZNS Specific Controller Data 00:07:23.674 ============================ 00:07:23.674 Zone Append Size Limit: 0 00:07:23.674 00:07:23.674 00:07:23.674 Active Namespaces 00:07:23.674 ================= 00:07:23.674 Namespace ID:1 00:07:23.674 Error Recovery Timeout: Unlimited 00:07:23.674 Command Set Identifier: NVM (00h) 00:07:23.674 Deallocate: Supported 00:07:23.674 Deallocated/Unwritten Error: Supported 00:07:23.674 Deallocated Read Value: All 0x00 00:07:23.674 Deallocate in Write Zeroes: Not Supported 00:07:23.674 Deallocated Guard Field: 0xFFFF 00:07:23.674 Flush: Supported 00:07:23.674 Reservation: Not Supported 00:07:23.674 Namespace Sharing Capabilities: Private 00:07:23.674 Size (in LBAs): 1048576 (4GiB) 00:07:23.674 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.674 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.674 Thin Provisioning: Not Supported 00:07:23.674 Per-NS Atomic Units: No 00:07:23.674 Maximum Single Source Range Length: 128 00:07:23.674 Maximum Copy Length: 128 00:07:23.674 Maximum Source Range Count: 128 00:07:23.674 NGUID/EUI64 Never Reused: No 00:07:23.674 Namespace Write Protected: No 00:07:23.674 Number of LBA Formats: 8 00:07:23.674 Current LBA Format: LBA Format #04 00:07:23.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.674 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.674 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.674 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.674 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.674 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.674 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.674 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.674 00:07:23.674 NVM Specific Namespace Data 00:07:23.674 =========================== 00:07:23.674 Logical Block Storage Tag Mask: 0 00:07:23.674 Protection Information Capabilities: 00:07:23.674 16b Guard Protection Information Storage Tag Support: No 00:07:23.674 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.674 Storage Tag Check Read Support: No 00:07:23.674 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.674 Namespace ID:2 00:07:23.674 Error Recovery Timeout: Unlimited 00:07:23.674 Command Set Identifier: NVM (00h) 00:07:23.674 Deallocate: Supported 00:07:23.674 Deallocated/Unwritten Error: Supported 00:07:23.674 Deallocated Read Value: All 0x00 00:07:23.674 Deallocate in Write Zeroes: Not Supported 00:07:23.674 Deallocated Guard Field: 0xFFFF 00:07:23.674 Flush: Supported 00:07:23.674 Reservation: Not Supported 00:07:23.674 Namespace Sharing Capabilities: Private 00:07:23.674 Size (in LBAs): 1048576 (4GiB) 00:07:23.674 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.674 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.674 Thin Provisioning: Not Supported 00:07:23.674 Per-NS Atomic Units: No 00:07:23.674 Maximum Single Source Range Length: 128 00:07:23.674 Maximum Copy Length: 128 00:07:23.674 Maximum Source Range Count: 128 00:07:23.674 NGUID/EUI64 Never Reused: No 00:07:23.674 Namespace Write Protected: No 00:07:23.674 Number of LBA Formats: 8 00:07:23.674 Current LBA Format: LBA Format #04 00:07:23.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.674 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.674 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.674 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.674 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.674 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.674 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.674 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.674 00:07:23.674 NVM Specific Namespace Data 00:07:23.674 =========================== 00:07:23.674 Logical Block Storage Tag Mask: 0 00:07:23.674 Protection Information Capabilities: 00:07:23.675 16b Guard Protection Information Storage Tag Support: No 00:07:23.675 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.675 Storage Tag Check Read Support: No 00:07:23.675 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Namespace ID:3 00:07:23.675 Error Recovery Timeout: Unlimited 00:07:23.675 Command Set Identifier: NVM (00h) 00:07:23.675 Deallocate: Supported 00:07:23.675 Deallocated/Unwritten Error: Supported 00:07:23.675 Deallocated Read Value: All 0x00 00:07:23.675 Deallocate in Write Zeroes: Not Supported 00:07:23.675 Deallocated Guard Field: 0xFFFF 00:07:23.675 Flush: Supported 00:07:23.675 Reservation: Not Supported 00:07:23.675 Namespace Sharing Capabilities: Private 00:07:23.675 Size (in LBAs): 1048576 (4GiB) 00:07:23.675 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.675 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.675 Thin Provisioning: Not Supported 00:07:23.675 Per-NS Atomic Units: No 00:07:23.675 Maximum Single Source Range Length: 128 00:07:23.675 Maximum Copy Length: 128 00:07:23.675 Maximum Source Range Count: 128 00:07:23.675 NGUID/EUI64 Never Reused: No 00:07:23.675 Namespace Write Protected: No 00:07:23.675 Number of LBA Formats: 8 00:07:23.675 Current LBA Format: LBA Format #04 00:07:23.675 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.675 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.675 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.675 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.675 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.675 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.675 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.675 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.675 00:07:23.675 NVM Specific Namespace Data 00:07:23.675 =========================== 00:07:23.675 Logical Block Storage Tag Mask: 0 00:07:23.675 Protection Information Capabilities: 00:07:23.675 16b Guard Protection Information Storage Tag Support: No 00:07:23.675 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.675 Storage Tag Check Read Support: No 00:07:23.675 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.675 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.675 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:23.936 ===================================================== 00:07:23.936 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.936 ===================================================== 00:07:23.936 Controller Capabilities/Features 00:07:23.936 ================================ 00:07:23.936 Vendor ID: 1b36 00:07:23.936 Subsystem Vendor ID: 1af4 00:07:23.936 Serial Number: 12340 00:07:23.936 Model Number: QEMU NVMe Ctrl 00:07:23.936 Firmware Version: 8.0.0 00:07:23.936 Recommended Arb Burst: 6 00:07:23.936 IEEE OUI Identifier: 00 54 52 00:07:23.936 Multi-path I/O 00:07:23.936 May have multiple subsystem ports: No 00:07:23.936 May have multiple controllers: No 00:07:23.936 Associated with SR-IOV VF: No 00:07:23.936 Max Data Transfer Size: 524288 00:07:23.936 Max Number of Namespaces: 256 00:07:23.936 Max Number of I/O Queues: 64 00:07:23.936 NVMe Specification Version (VS): 1.4 00:07:23.936 NVMe Specification Version (Identify): 1.4 00:07:23.936 Maximum Queue Entries: 2048 00:07:23.936 Contiguous Queues Required: Yes 00:07:23.936 Arbitration Mechanisms Supported 00:07:23.936 Weighted Round Robin: Not Supported 00:07:23.936 Vendor Specific: Not Supported 00:07:23.936 Reset Timeout: 7500 ms 00:07:23.936 Doorbell Stride: 4 bytes 00:07:23.936 NVM Subsystem Reset: Not Supported 00:07:23.936 Command Sets Supported 00:07:23.936 NVM Command Set: Supported 00:07:23.937 Boot Partition: Not Supported 00:07:23.937 Memory Page Size Minimum: 4096 bytes 00:07:23.937 Memory Page Size Maximum: 65536 bytes 00:07:23.937 Persistent Memory Region: Not Supported 00:07:23.937 Optional Asynchronous Events Supported 00:07:23.937 Namespace Attribute Notices: Supported 00:07:23.937 Firmware Activation Notices: Not Supported 00:07:23.937 ANA Change Notices: Not Supported 00:07:23.937 PLE Aggregate Log Change Notices: Not Supported 00:07:23.937 LBA Status Info Alert Notices: Not Supported 00:07:23.937 EGE Aggregate Log Change Notices: Not Supported 00:07:23.937 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.937 Zone Descriptor Change Notices: Not Supported 00:07:23.937 Discovery Log Change Notices: Not Supported 00:07:23.937 Controller Attributes 00:07:23.937 128-bit Host Identifier: Not Supported 00:07:23.937 Non-Operational Permissive Mode: Not Supported 00:07:23.937 NVM Sets: Not Supported 00:07:23.937 Read Recovery Levels: Not Supported 00:07:23.937 Endurance Groups: Not Supported 00:07:23.937 Predictable Latency Mode: Not Supported 00:07:23.937 Traffic Based Keep Alive: Not Supported 00:07:23.937 Namespace Granularity: Not Supported 00:07:23.937 SQ Associations: Not Supported 00:07:23.937 UUID List: Not Supported 00:07:23.937 Multi-Domain Subsystem: Not Supported 00:07:23.937 Fixed Capacity Management: Not Supported 00:07:23.937 Variable Capacity Management: Not Supported 00:07:23.937 Delete Endurance Group: Not Supported 00:07:23.937 Delete NVM Set: Not Supported 00:07:23.937 Extended LBA Formats Supported: Supported 00:07:23.937 Flexible Data Placement Supported: Not Supported 00:07:23.937 00:07:23.937 Controller Memory Buffer Support 00:07:23.937 ================================ 00:07:23.937 Supported: No 00:07:23.937 00:07:23.937 Persistent Memory Region Support 00:07:23.937 ================================ 00:07:23.937 Supported: No 00:07:23.937 00:07:23.937 Admin Command Set Attributes 00:07:23.937 ============================ 00:07:23.937 Security Send/Receive: Not Supported 00:07:23.937
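The nvme.sh@15/@16 trace above shows the test iterating its bdfs array and running spdk_nvme_identify once per PCIe address; the same -r 'trtype:PCIe traddr:...' -i 0 invocation recurs for each controller in this section. A minimal reconstruction of that loop (the address list here is illustrative; the real script discovers it during setup):

    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)  # assumed list
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done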
Format NVM: Supported 00:07:23.937 Firmware Activate/Download: Not Supported 00:07:23.937 Namespace Management: Supported 00:07:23.937 Device Self-Test: Not Supported 00:07:23.937 Directives: Supported 00:07:23.937 NVMe-MI: Not Supported 00:07:23.937 Virtualization Management: Not Supported 00:07:23.937 Doorbell Buffer Config: Supported 00:07:23.937 Get LBA Status Capability: Not Supported 00:07:23.937 Command & Feature Lockdown Capability: Not Supported 00:07:23.937 Abort Command Limit: 4 00:07:23.937 Async Event Request Limit: 4 00:07:23.937 Number of Firmware Slots: N/A 00:07:23.937 Firmware Slot 1 Read-Only: N/A 00:07:23.937 Firmware Activation Without Reset: N/A 00:07:23.937 Multiple Update Detection Support: N/A 00:07:23.937 Firmware Update Granularity: No Information Provided 00:07:23.937 Per-Namespace SMART Log: Yes 00:07:23.937 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.937 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:23.937 Command Effects Log Page: Supported 00:07:23.937 Get Log Page Extended Data: Supported 00:07:23.937 Telemetry Log Pages: Not Supported 00:07:23.937 Persistent Event Log Pages: Not Supported 00:07:23.937 Supported Log Pages Log Page: May Support 00:07:23.937 Commands Supported & Effects Log Page: Not Supported 00:07:23.937 Feature Identifiers & Effects Log Page: May Support 00:07:23.937 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.937 Data Area 4 for Telemetry Log: Not Supported 00:07:23.937 Error Log Page Entries Supported: 1 00:07:23.937 Keep Alive: Not Supported 00:07:23.937 00:07:23.937 NVM Command Set Attributes 00:07:23.937 ========================== 00:07:23.937 Submission Queue Entry Size 00:07:23.937 Max: 64 00:07:23.937 Min: 64 00:07:23.937 Completion Queue Entry Size 00:07:23.937 Max: 16 00:07:23.937 Min: 16 00:07:23.937 Number of Namespaces: 256 00:07:23.937 Compare Command: Supported 00:07:23.937 Write Uncorrectable Command: Not Supported 00:07:23.937 Dataset Management Command: Supported 00:07:23.937 Write Zeroes Command: Supported 00:07:23.937 Set Features Save Field: Supported 00:07:23.937 Reservations: Not Supported 00:07:23.937 Timestamp: Supported 00:07:23.937 Copy: Supported 00:07:23.937 Volatile Write Cache: Present 00:07:23.937 Atomic Write Unit (Normal): 1 00:07:23.937 Atomic Write Unit (PFail): 1 00:07:23.937 Atomic Compare & Write Unit: 1 00:07:23.937 Fused Compare & Write: Not Supported 00:07:23.937 Scatter-Gather List 00:07:23.937 SGL Command Set: Supported 00:07:23.937 SGL Keyed: Not Supported 00:07:23.937 SGL Bit Bucket Descriptor: Not Supported 00:07:23.937 SGL Metadata Pointer: Not Supported 00:07:23.937 Oversized SGL: Not Supported 00:07:23.937 SGL Metadata Address: Not Supported 00:07:23.937 SGL Offset: Not Supported 00:07:23.937 Transport SGL Data Block: Not Supported 00:07:23.937 Replay Protected Memory Block: Not Supported 00:07:23.937 00:07:23.937 Firmware Slot Information 00:07:23.937 ========================= 00:07:23.937 Active slot: 1 00:07:23.937 Slot 1 Firmware Revision: 1.0 00:07:23.937 00:07:23.937 00:07:23.937 Commands Supported and Effects 00:07:23.937 ============================== 00:07:23.937 Admin Commands 00:07:23.937 -------------- 00:07:23.937 Delete I/O Submission Queue (00h): Supported 00:07:23.937 Create I/O Submission Queue (01h): Supported 00:07:23.937 Get Log Page (02h): Supported 00:07:23.937 Delete I/O Completion Queue (04h): Supported 00:07:23.937 Create I/O Completion Queue (05h): Supported 00:07:23.937 Identify (06h): Supported 00:07:23.937 Abort (08h): Supported
00:07:23.937 Set Features (09h): Supported 00:07:23.937 Get Features (0Ah): Supported 00:07:23.937 Asynchronous Event Request (0Ch): Supported 00:07:23.937 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.937 Directive Send (19h): Supported 00:07:23.937 Directive Receive (1Ah): Supported 00:07:23.937 Virtualization Management (1Ch): Supported 00:07:23.937 Doorbell Buffer Config (7Ch): Supported 00:07:23.937 Format NVM (80h): Supported LBA-Change 00:07:23.937 I/O Commands 00:07:23.937 ------------ 00:07:23.937 Flush (00h): Supported LBA-Change 00:07:23.937 Write (01h): Supported LBA-Change 00:07:23.937 Read (02h): Supported 00:07:23.937 Compare (05h): Supported 00:07:23.937 Write Zeroes (08h): Supported LBA-Change 00:07:23.937 Dataset Management (09h): Supported LBA-Change 00:07:23.937 Unknown (0Ch): Supported 00:07:23.937 Unknown (12h): Supported 00:07:23.938 Copy (19h): Supported LBA-Change 00:07:23.938 Unknown (1Dh): Supported LBA-Change 00:07:23.938 00:07:23.938 Error Log 00:07:23.938 ========= 00:07:23.938 00:07:23.938 Arbitration 00:07:23.938 =========== 00:07:23.938 Arbitration Burst: no limit 00:07:23.938 00:07:23.938 Power Management 00:07:23.938 ================ 00:07:23.938 Number of Power States: 1 00:07:23.938 Current Power State: Power State #0 00:07:23.938 Power State #0: 00:07:23.938 Max Power: 25.00 W 00:07:23.938 Non-Operational State: Operational 00:07:23.938 Entry Latency: 16 microseconds 00:07:23.938 Exit Latency: 4 microseconds 00:07:23.938 Relative Read Throughput: 0 00:07:23.938 Relative Read Latency: 0 00:07:23.938 Relative Write Throughput: 0 00:07:23.938 Relative Write Latency: 0 00:07:23.938 Idle Power: Not Reported 00:07:23.938 Active Power: Not Reported 00:07:23.938 Non-Operational Permissive Mode: Not Supported 00:07:23.938 00:07:23.938 Health Information 00:07:23.938 ================== 00:07:23.938 Critical Warnings: 00:07:23.938 Available Spare Space: OK 00:07:23.938 Temperature: OK 00:07:23.938 Device Reliability: OK 00:07:23.938 Read Only: No 00:07:23.938 Volatile Memory Backup: OK 00:07:23.938 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.938 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.938 Available Spare: 0% 00:07:23.938 Available Spare Threshold: 0% 00:07:23.938 Life Percentage Used: 0% 00:07:23.938 Data Units Read: 738 00:07:23.938 Data Units Written: 666 00:07:23.938 Host Read Commands: 38426 00:07:23.938 Host Write Commands: 38212 00:07:23.938 Controller Busy Time: 0 minutes 00:07:23.938 Power Cycles: 0 00:07:23.938 Power On Hours: 0 hours 00:07:23.938 Unsafe Shutdowns: 0 00:07:23.938 Unrecoverable Media Errors: 0 00:07:23.938 Lifetime Error Log Entries: 0 00:07:23.938 Warning Temperature Time: 0 minutes 00:07:23.938 Critical Temperature Time: 0 minutes 00:07:23.938 00:07:23.938 Number of Queues 00:07:23.938 ================ 00:07:23.938 Number of I/O Submission Queues: 64 00:07:23.938 Number of I/O Completion Queues: 64 00:07:23.938 00:07:23.938 ZNS Specific Controller Data 00:07:23.938 ============================ 00:07:23.938 Zone Append Size Limit: 0 00:07:23.938 00:07:23.938 00:07:23.938 Active Namespaces 00:07:23.938 ================= 00:07:23.938 Namespace ID:1 00:07:23.938 Error Recovery Timeout: Unlimited 00:07:23.938 Command Set Identifier: NVM (00h) 00:07:23.938 Deallocate: Supported 00:07:23.938 Deallocated/Unwritten Error: Supported 00:07:23.938 Deallocated Read Value: All 0x00 00:07:23.938 Deallocate in Write Zeroes: Not Supported 00:07:23.938 Deallocated Guard Field: 0xFFFF 00:07:23.938 Flush: 
Supported 00:07:23.938 Reservation: Not Supported 00:07:23.938 Metadata Transferred as: Separate Metadata Buffer 00:07:23.938 Namespace Sharing Capabilities: Private 00:07:23.938 Size (in LBAs): 1548666 (5GiB) 00:07:23.938 Capacity (in LBAs): 1548666 (5GiB) 00:07:23.938 Utilization (in LBAs): 1548666 (5GiB) 00:07:23.938 Thin Provisioning: Not Supported 00:07:23.938 Per-NS Atomic Units: No 00:07:23.938 Maximum Single Source Range Length: 128 00:07:23.938 Maximum Copy Length: 128 00:07:23.938 Maximum Source Range Count: 128 00:07:23.938 NGUID/EUI64 Never Reused: No 00:07:23.938 Namespace Write Protected: No 00:07:23.938 Number of LBA Formats: 8 00:07:23.938 Current LBA Format: LBA Format #07 00:07:23.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.938 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.938 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.938 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.938 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.938 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.938 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.938 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.938 00:07:23.938 NVM Specific Namespace Data 00:07:23.938 =========================== 00:07:23.938 Logical Block Storage Tag Mask: 0 00:07:23.938 Protection Information Capabilities: 00:07:23.938 16b Guard Protection Information Storage Tag Support: No 00:07:23.938 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.938 Storage Tag Check Read Support: No 00:07:23.938 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.938 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.938 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:24.201 ===================================================== 00:07:24.201 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:24.201 ===================================================== 00:07:24.201 Controller Capabilities/Features 00:07:24.201 ================================ 00:07:24.201 Vendor ID: 1b36 00:07:24.201 Subsystem Vendor ID: 1af4 00:07:24.201 Serial Number: 12341 00:07:24.201 Model Number: QEMU NVMe Ctrl 00:07:24.201 Firmware Version: 8.0.0 00:07:24.201 Recommended Arb Burst: 6 00:07:24.201 IEEE OUI Identifier: 00 54 52 00:07:24.201 Multi-path I/O 00:07:24.201 May have multiple subsystem ports: No 00:07:24.201 May have multiple controllers: No 00:07:24.201 Associated with SR-IOV VF: No 00:07:24.201 Max Data Transfer Size: 524288 00:07:24.201 Max Number of Namespaces: 256 00:07:24.201 Max Number of I/O Queues: 64 00:07:24.201 NVMe 
Specification Version (VS): 1.4 00:07:24.201 NVMe Specification Version (Identify): 1.4 00:07:24.201 Maximum Queue Entries: 2048 00:07:24.201 Contiguous Queues Required: Yes 00:07:24.201 Arbitration Mechanisms Supported 00:07:24.201 Weighted Round Robin: Not Supported 00:07:24.201 Vendor Specific: Not Supported 00:07:24.201 Reset Timeout: 7500 ms 00:07:24.201 Doorbell Stride: 4 bytes 00:07:24.201 NVM Subsystem Reset: Not Supported 00:07:24.201 Command Sets Supported 00:07:24.201 NVM Command Set: Supported 00:07:24.201 Boot Partition: Not Supported 00:07:24.201 Memory Page Size Minimum: 4096 bytes 00:07:24.201 Memory Page Size Maximum: 65536 bytes 00:07:24.201 Persistent Memory Region: Not Supported 00:07:24.201 Optional Asynchronous Events Supported 00:07:24.201 Namespace Attribute Notices: Supported 00:07:24.201 Firmware Activation Notices: Not Supported 00:07:24.201 ANA Change Notices: Not Supported 00:07:24.201 PLE Aggregate Log Change Notices: Not Supported 00:07:24.201 LBA Status Info Alert Notices: Not Supported 00:07:24.201 EGE Aggregate Log Change Notices: Not Supported 00:07:24.201 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.201 Zone Descriptor Change Notices: Not Supported 00:07:24.201 Discovery Log Change Notices: Not Supported 00:07:24.201 Controller Attributes 00:07:24.201 128-bit Host Identifier: Not Supported 00:07:24.201 Non-Operational Permissive Mode: Not Supported 00:07:24.201 NVM Sets: Not Supported 00:07:24.201 Read Recovery Levels: Not Supported 00:07:24.201 Endurance Groups: Not Supported 00:07:24.201 Predictable Latency Mode: Not Supported 00:07:24.201 Traffic Based Keep Alive: Not Supported 00:07:24.201 Namespace Granularity: Not Supported 00:07:24.201 SQ Associations: Not Supported 00:07:24.201 UUID List: Not Supported 00:07:24.201 Multi-Domain Subsystem: Not Supported 00:07:24.201 Fixed Capacity Management: Not Supported 00:07:24.201 Variable Capacity Management: Not Supported 00:07:24.201 Delete Endurance Group: Not Supported 00:07:24.201 Delete NVM Set: Not Supported 00:07:24.201 Extended LBA Formats Supported: Supported 00:07:24.201 Flexible Data Placement Supported: Not Supported 00:07:24.201 00:07:24.201 Controller Memory Buffer Support 00:07:24.201 ================================ 00:07:24.201 Supported: No 00:07:24.201 00:07:24.201 Persistent Memory Region Support 00:07:24.201 ================================ 00:07:24.201 Supported: No 00:07:24.201 00:07:24.201 Admin Command Set Attributes 00:07:24.201 ============================ 00:07:24.201 Security Send/Receive: Not Supported 00:07:24.201 Format NVM: Supported 00:07:24.201 Firmware Activate/Download: Not Supported 00:07:24.201 Namespace Management: Supported 00:07:24.201 Device Self-Test: Not Supported 00:07:24.201 Directives: Supported 00:07:24.201 NVMe-MI: Not Supported 00:07:24.201 Virtualization Management: Not Supported 00:07:24.201 Doorbell Buffer Config: Supported 00:07:24.201 Get LBA Status Capability: Not Supported 00:07:24.201 Command & Feature Lockdown Capability: Not Supported 00:07:24.201 Abort Command Limit: 4 00:07:24.201 Async Event Request Limit: 4 00:07:24.201 Number of Firmware Slots: N/A 00:07:24.201 Firmware Slot 1 Read-Only: N/A 00:07:24.201 Firmware Activation Without Reset: N/A 00:07:24.201 Multiple Update Detection Support: N/A 00:07:24.201 Firmware Update Granularity: No Information Provided 00:07:24.201 Per-Namespace SMART Log: Yes 00:07:24.201 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.201 Subsystem NQN: nqn.2019-08.org.qemu:12341
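The Health Information blocks in these dumps report temperature in Kelvin with a parenthesized Celsius value derived by the integer offset Celsius = Kelvin - 273 (323 K -> 50 C, 343 K -> 70 C), and the Data Units counters follow the NVMe SMART convention of 1000 x 512-byte units. A quick sketch of both conversions using the 0000:00:10.0 figures above:

    kelvin=323
    echo "$((kelvin - 273)) Celsius"        # 50, matching "(50 Celsius)"
    units_read=738                          # "Data Units Read" for serial 12340
    echo "$((units_read * 512000)) bytes"   # 377856000 bytes read from the device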
00:07:24.201 Command Effects Log Page: Supported 00:07:24.201 Get Log Page Extended Data: Supported 00:07:24.201 Telemetry Log Pages: Not Supported 00:07:24.201 Persistent Event Log Pages: Not Supported 00:07:24.201 Supported Log Pages Log Page: May Support 00:07:24.201 Commands Supported & Effects Log Page: Not Supported 00:07:24.201 Feature Identifiers & Effects Log Page: May Support 00:07:24.201 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.201 Data Area 4 for Telemetry Log: Not Supported 00:07:24.201 Error Log Page Entries Supported: 1 00:07:24.201 Keep Alive: Not Supported 00:07:24.201 00:07:24.201 NVM Command Set Attributes 00:07:24.201 ========================== 00:07:24.201 Submission Queue Entry Size 00:07:24.201 Max: 64 00:07:24.201 Min: 64 00:07:24.201 Completion Queue Entry Size 00:07:24.201 Max: 16 00:07:24.201 Min: 16 00:07:24.201 Number of Namespaces: 256 00:07:24.201 Compare Command: Supported 00:07:24.201 Write Uncorrectable Command: Not Supported 00:07:24.201 Dataset Management Command: Supported 00:07:24.201 Write Zeroes Command: Supported 00:07:24.201 Set Features Save Field: Supported 00:07:24.201 Reservations: Not Supported 00:07:24.201 Timestamp: Supported 00:07:24.201 Copy: Supported 00:07:24.201 Volatile Write Cache: Present 00:07:24.201 Atomic Write Unit (Normal): 1 00:07:24.201 Atomic Write Unit (PFail): 1 00:07:24.201 Atomic Compare & Write Unit: 1 00:07:24.201 Fused Compare & Write: Not Supported 00:07:24.201 Scatter-Gather List 00:07:24.201 SGL Command Set: Supported 00:07:24.201 SGL Keyed: Not Supported 00:07:24.201 SGL Bit Bucket Descriptor: Not Supported 00:07:24.201 SGL Metadata Pointer: Not Supported 00:07:24.201 Oversized SGL: Not Supported 00:07:24.201 SGL Metadata Address: Not Supported 00:07:24.201 SGL Offset: Not Supported 00:07:24.201 Transport SGL Data Block: Not Supported 00:07:24.201 Replay Protected Memory Block: Not Supported 00:07:24.201 00:07:24.201 Firmware Slot Information 00:07:24.201 ========================= 00:07:24.201 Active slot: 1 00:07:24.201 Slot 1 Firmware Revision: 1.0 00:07:24.201 00:07:24.201 00:07:24.201 Commands Supported and Effects 00:07:24.201 ============================== 00:07:24.201 Admin Commands 00:07:24.201 -------------- 00:07:24.201 Delete I/O Submission Queue (00h): Supported 00:07:24.201 Create I/O Submission Queue (01h): Supported 00:07:24.201 Get Log Page (02h): Supported 00:07:24.201 Delete I/O Completion Queue (04h): Supported 00:07:24.201 Create I/O Completion Queue (05h): Supported 00:07:24.201 Identify (06h): Supported 00:07:24.201 Abort (08h): Supported 00:07:24.201 Set Features (09h): Supported 00:07:24.201 Get Features (0Ah): Supported 00:07:24.201 Asynchronous Event Request (0Ch): Supported 00:07:24.202 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.202 Directive Send (19h): Supported 00:07:24.202 Directive Receive (1Ah): Supported 00:07:24.202 Virtualization Management (1Ch): Supported 00:07:24.202 Doorbell Buffer Config (7Ch): Supported 00:07:24.202 Format NVM (80h): Supported LBA-Change 00:07:24.202 I/O Commands 00:07:24.202 ------------ 00:07:24.202 Flush (00h): Supported LBA-Change 00:07:24.202 Write (01h): Supported LBA-Change 00:07:24.202 Read (02h): Supported 00:07:24.202 Compare (05h): Supported 00:07:24.202 Write Zeroes (08h): Supported LBA-Change 00:07:24.202 Dataset Management (09h): Supported LBA-Change 00:07:24.202 Unknown (0Ch): Supported 00:07:24.202 Unknown (12h): Supported 00:07:24.202 Copy (19h): Supported LBA-Change 00:07:24.202 Unknown (1Dh):
Supported LBA-Change 00:07:24.202 00:07:24.202 Error Log 00:07:24.202 ========= 00:07:24.202 00:07:24.202 Arbitration 00:07:24.202 =========== 00:07:24.202 Arbitration Burst: no limit 00:07:24.202 00:07:24.202 Power Management 00:07:24.202 ================ 00:07:24.202 Number of Power States: 1 00:07:24.202 Current Power State: Power State #0 00:07:24.202 Power State #0: 00:07:24.202 Max Power: 25.00 W 00:07:24.202 Non-Operational State: Operational 00:07:24.202 Entry Latency: 16 microseconds 00:07:24.202 Exit Latency: 4 microseconds 00:07:24.202 Relative Read Throughput: 0 00:07:24.202 Relative Read Latency: 0 00:07:24.202 Relative Write Throughput: 0 00:07:24.202 Relative Write Latency: 0 00:07:24.202 Idle Power: Not Reported 00:07:24.202 Active Power: Not Reported 00:07:24.202 Non-Operational Permissive Mode: Not Supported 00:07:24.202 00:07:24.202 Health Information 00:07:24.202 ================== 00:07:24.202 Critical Warnings: 00:07:24.202 Available Spare Space: OK 00:07:24.202 Temperature: OK 00:07:24.202 Device Reliability: OK 00:07:24.202 Read Only: No 00:07:24.202 Volatile Memory Backup: OK 00:07:24.202 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.202 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.202 Available Spare: 0% 00:07:24.202 Available Spare Threshold: 0% 00:07:24.202 Life Percentage Used: 0% 00:07:24.202 Data Units Read: 1128 00:07:24.202 Data Units Written: 995 00:07:24.202 Host Read Commands: 56861 00:07:24.202 Host Write Commands: 55665 00:07:24.202 Controller Busy Time: 0 minutes 00:07:24.202 Power Cycles: 0 00:07:24.202 Power On Hours: 0 hours 00:07:24.202 Unsafe Shutdowns: 0 00:07:24.202 Unrecoverable Media Errors: 0 00:07:24.202 Lifetime Error Log Entries: 0 00:07:24.202 Warning Temperature Time: 0 minutes 00:07:24.202 Critical Temperature Time: 0 minutes 00:07:24.202 00:07:24.202 Number of Queues 00:07:24.202 ================ 00:07:24.202 Number of I/O Submission Queues: 64 00:07:24.202 Number of I/O Completion Queues: 64 00:07:24.202 00:07:24.202 ZNS Specific Controller Data 00:07:24.202 ============================ 00:07:24.202 Zone Append Size Limit: 0 00:07:24.202 00:07:24.202 00:07:24.202 Active Namespaces 00:07:24.202 ================= 00:07:24.202 Namespace ID:1 00:07:24.202 Error Recovery Timeout: Unlimited 00:07:24.202 Command Set Identifier: NVM (00h) 00:07:24.202 Deallocate: Supported 00:07:24.202 Deallocated/Unwritten Error: Supported 00:07:24.202 Deallocated Read Value: All 0x00 00:07:24.202 Deallocate in Write Zeroes: Not Supported 00:07:24.202 Deallocated Guard Field: 0xFFFF 00:07:24.202 Flush: Supported 00:07:24.202 Reservation: Not Supported 00:07:24.202 Namespace Sharing Capabilities: Private 00:07:24.202 Size (in LBAs): 1310720 (5GiB) 00:07:24.202 Capacity (in LBAs): 1310720 (5GiB) 00:07:24.202 Utilization (in LBAs): 1310720 (5GiB) 00:07:24.202 Thin Provisioning: Not Supported 00:07:24.202 Per-NS Atomic Units: No 00:07:24.202 Maximum Single Source Range Length: 128 00:07:24.202 Maximum Copy Length: 128 00:07:24.202 Maximum Source Range Count: 128 00:07:24.202 NGUID/EUI64 Never Reused: No 00:07:24.202 Namespace Write Protected: No 00:07:24.202 Number of LBA Formats: 8 00:07:24.202 Current LBA Format: LBA Format #04 00:07:24.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.202 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.202 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.202 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.202 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:24.202 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.202 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.202 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.202 00:07:24.202 NVM Specific Namespace Data 00:07:24.202 =========================== 00:07:24.202 Logical Block Storage Tag Mask: 0 00:07:24.202 Protection Information Capabilities: 00:07:24.202 16b Guard Protection Information Storage Tag Support: No 00:07:24.202 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.202 Storage Tag Check Read Support: No 00:07:24.202 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.202 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:24.202 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:24.464 ===================================================== 00:07:24.464 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:24.464 ===================================================== 00:07:24.464 Controller Capabilities/Features 00:07:24.464 ================================ 00:07:24.464 Vendor ID: 1b36 00:07:24.464 Subsystem Vendor ID: 1af4 00:07:24.464 Serial Number: 12342 00:07:24.464 Model Number: QEMU NVMe Ctrl 00:07:24.464 Firmware Version: 8.0.0 00:07:24.464 Recommended Arb Burst: 6 00:07:24.464 IEEE OUI Identifier: 00 54 52 00:07:24.465 Multi-path I/O 00:07:24.465 May have multiple subsystem ports: No 00:07:24.465 May have multiple controllers: No 00:07:24.465 Associated with SR-IOV VF: No 00:07:24.465 Max Data Transfer Size: 524288 00:07:24.465 Max Number of Namespaces: 256 00:07:24.465 Max Number of I/O Queues: 64 00:07:24.465 NVMe Specification Version (VS): 1.4 00:07:24.465 NVMe Specification Version (Identify): 1.4 00:07:24.465 Maximum Queue Entries: 2048 00:07:24.465 Contiguous Queues Required: Yes 00:07:24.465 Arbitration Mechanisms Supported 00:07:24.465 Weighted Round Robin: Not Supported 00:07:24.465 Vendor Specific: Not Supported 00:07:24.465 Reset Timeout: 7500 ms 00:07:24.465 Doorbell Stride: 4 bytes 00:07:24.465 NVM Subsystem Reset: Not Supported 00:07:24.465 Command Sets Supported 00:07:24.465 NVM Command Set: Supported 00:07:24.465 Boot Partition: Not Supported 00:07:24.465 Memory Page Size Minimum: 4096 bytes 00:07:24.465 Memory Page Size Maximum: 65536 bytes 00:07:24.465 Persistent Memory Region: Not Supported 00:07:24.465 Optional Asynchronous Events Supported 00:07:24.465 Namespace Attribute Notices: Supported 00:07:24.465 Firmware Activation Notices: Not Supported 00:07:24.465 ANA Change Notices: Not Supported 00:07:24.465 PLE Aggregate Log Change Notices: Not Supported 00:07:24.465 LBA Status Info Alert Notices: 
Not Supported 00:07:24.465 EGE Aggregate Log Change Notices: Not Supported 00:07:24.465 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.465 Zone Descriptor Change Notices: Not Supported 00:07:24.465 Discovery Log Change Notices: Not Supported 00:07:24.465 Controller Attributes 00:07:24.465 128-bit Host Identifier: Not Supported 00:07:24.465 Non-Operational Permissive Mode: Not Supported 00:07:24.465 NVM Sets: Not Supported 00:07:24.465 Read Recovery Levels: Not Supported 00:07:24.465 Endurance Groups: Not Supported 00:07:24.465 Predictable Latency Mode: Not Supported 00:07:24.465 Traffic Based Keep Alive: Not Supported 00:07:24.465 Namespace Granularity: Not Supported 00:07:24.465 SQ Associations: Not Supported 00:07:24.465 UUID List: Not Supported 00:07:24.465 Multi-Domain Subsystem: Not Supported 00:07:24.465 Fixed Capacity Management: Not Supported 00:07:24.465 Variable Capacity Management: Not Supported 00:07:24.465 Delete Endurance Group: Not Supported 00:07:24.465 Delete NVM Set: Not Supported 00:07:24.465 Extended LBA Formats Supported: Supported 00:07:24.465 Flexible Data Placement Supported: Not Supported 00:07:24.465 00:07:24.465 Controller Memory Buffer Support 00:07:24.465 ================================ 00:07:24.465 Supported: No 00:07:24.465 00:07:24.465 Persistent Memory Region Support 00:07:24.465 ================================ 00:07:24.465 Supported: No 00:07:24.465 00:07:24.465 Admin Command Set Attributes 00:07:24.465 ============================ 00:07:24.465 Security Send/Receive: Not Supported 00:07:24.465 Format NVM: Supported 00:07:24.465 Firmware Activate/Download: Not Supported 00:07:24.465 Namespace Management: Supported 00:07:24.465 Device Self-Test: Not Supported 00:07:24.465 Directives: Supported 00:07:24.465 NVMe-MI: Not Supported 00:07:24.465 Virtualization Management: Not Supported 00:07:24.465 Doorbell Buffer Config: Supported 00:07:24.465 Get LBA Status Capability: Not Supported 00:07:24.465 Command & Feature Lockdown Capability: Not Supported 00:07:24.465 Abort Command Limit: 4 00:07:24.465 Async Event Request Limit: 4 00:07:24.465 Number of Firmware Slots: N/A 00:07:24.465 Firmware Slot 1 Read-Only: N/A 00:07:24.465 Firmware Activation Without Reset: N/A 00:07:24.465 Multiple Update Detection Support: N/A 00:07:24.465 Firmware Update Granularity: No Information Provided 00:07:24.465 Per-Namespace SMART Log: Yes 00:07:24.465 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.465 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:24.465 Command Effects Log Page: Supported 00:07:24.465 Get Log Page Extended Data: Supported 00:07:24.465 Telemetry Log Pages: Not Supported 00:07:24.465 Persistent Event Log Pages: Not Supported 00:07:24.465 Supported Log Pages Log Page: May Support 00:07:24.465 Commands Supported & Effects Log Page: Not Supported 00:07:24.465 Feature Identifiers & Effects Log Page: May Support 00:07:24.465 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.465 Data Area 4 for Telemetry Log: Not Supported 00:07:24.465 Error Log Page Entries Supported: 1 00:07:24.465 Keep Alive: Not Supported 00:07:24.465 00:07:24.465 NVM Command Set Attributes 00:07:24.465 ========================== 00:07:24.465 Submission Queue Entry Size 00:07:24.465 Max: 64 00:07:24.465 Min: 64 00:07:24.465 Completion Queue Entry Size 00:07:24.465 Max: 16 00:07:24.465 Min: 16 00:07:24.465 Number of Namespaces: 256 00:07:24.465 Compare Command: Supported 00:07:24.465 Write Uncorrectable Command: Not Supported 00:07:24.465 Dataset Management Command:
Supported 00:07:24.465 Write Zeroes Command: Supported 00:07:24.465 Set Features Save Field: Supported 00:07:24.465 Reservations: Not Supported 00:07:24.465 Timestamp: Supported 00:07:24.465 Copy: Supported 00:07:24.465 Volatile Write Cache: Present 00:07:24.465 Atomic Write Unit (Normal): 1 00:07:24.465 Atomic Write Unit (PFail): 1 00:07:24.465 Atomic Compare & Write Unit: 1 00:07:24.465 Fused Compare & Write: Not Supported 00:07:24.465 Scatter-Gather List 00:07:24.465 SGL Command Set: Supported 00:07:24.465 SGL Keyed: Not Supported 00:07:24.465 SGL Bit Bucket Descriptor: Not Supported 00:07:24.465 SGL Metadata Pointer: Not Supported 00:07:24.465 Oversized SGL: Not Supported 00:07:24.465 SGL Metadata Address: Not Supported 00:07:24.465 SGL Offset: Not Supported 00:07:24.465 Transport SGL Data Block: Not Supported 00:07:24.465 Replay Protected Memory Block: Not Supported 00:07:24.465 00:07:24.465 Firmware Slot Information 00:07:24.465 ========================= 00:07:24.465 Active slot: 1 00:07:24.465 Slot 1 Firmware Revision: 1.0 00:07:24.465 00:07:24.465 00:07:24.465 Commands Supported and Effects 00:07:24.465 ============================== 00:07:24.465 Admin Commands 00:07:24.465 -------------- 00:07:24.465 Delete I/O Submission Queue (00h): Supported 00:07:24.465 Create I/O Submission Queue (01h): Supported 00:07:24.465 Get Log Page (02h): Supported 00:07:24.465 Delete I/O Completion Queue (04h): Supported 00:07:24.465 Create I/O Completion Queue (05h): Supported 00:07:24.465 Identify (06h): Supported 00:07:24.465 Abort (08h): Supported 00:07:24.465 Set Features (09h): Supported 00:07:24.465 Get Features (0Ah): Supported 00:07:24.465 Asynchronous Event Request (0Ch): Supported 00:07:24.465 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.465 Directive Send (19h): Supported 00:07:24.465 Directive Receive (1Ah): Supported 00:07:24.465 Virtualization Management (1Ch): Supported 00:07:24.465 Doorbell Buffer Config (7Ch): Supported 00:07:24.465 Format NVM (80h): Supported LBA-Change 00:07:24.465 I/O Commands 00:07:24.465 ------------ 00:07:24.465 Flush (00h): Supported LBA-Change 00:07:24.465 Write (01h): Supported LBA-Change 00:07:24.465 Read (02h): Supported 00:07:24.465 Compare (05h): Supported 00:07:24.465 Write Zeroes (08h): Supported LBA-Change 00:07:24.465 Dataset Management (09h): Supported LBA-Change 00:07:24.465 Unknown (0Ch): Supported 00:07:24.465 Unknown (12h): Supported 00:07:24.465 Copy (19h): Supported LBA-Change 00:07:24.465 Unknown (1Dh): Supported LBA-Change 00:07:24.465 00:07:24.465 Error Log 00:07:24.465 ========= 00:07:24.465 00:07:24.465 Arbitration 00:07:24.465 =========== 00:07:24.465 Arbitration Burst: no limit 00:07:24.465 00:07:24.465 Power Management 00:07:24.465 ================ 00:07:24.465 Number of Power States: 1 00:07:24.465 Current Power State: Power State #0 00:07:24.465 Power State #0: 00:07:24.465 Max Power: 25.00 W 00:07:24.465 Non-Operational State: Operational 00:07:24.465 Entry Latency: 16 microseconds 00:07:24.465 Exit Latency: 4 microseconds 00:07:24.465 Relative Read Throughput: 0 00:07:24.466 Relative Read Latency: 0 00:07:24.466 Relative Write Throughput: 0 00:07:24.466 Relative Write Latency: 0 00:07:24.466 Idle Power: Not Reported 00:07:24.466 Active Power: Not Reported 00:07:24.466 Non-Operational Permissive Mode: Not Supported 00:07:24.466 00:07:24.466 Health Information 00:07:24.466 ================== 00:07:24.466 Critical Warnings: 00:07:24.466 Available Spare Space: OK 00:07:24.466 Temperature: OK 00:07:24.466 Device 
Reliability: OK 00:07:24.466 Read Only: No 00:07:24.466 Volatile Memory Backup: OK 00:07:24.466 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.466 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.466 Available Spare: 0% 00:07:24.466 Available Spare Threshold: 0% 00:07:24.466 Life Percentage Used: 0% 00:07:24.466 Data Units Read: 2391 00:07:24.466 Data Units Written: 2178 00:07:24.466 Host Read Commands: 117078 00:07:24.466 Host Write Commands: 115349 00:07:24.466 Controller Busy Time: 0 minutes 00:07:24.466 Power Cycles: 0 00:07:24.466 Power On Hours: 0 hours 00:07:24.466 Unsafe Shutdowns: 0 00:07:24.466 Unrecoverable Media Errors: 0 00:07:24.466 Lifetime Error Log Entries: 0 00:07:24.466 Warning Temperature Time: 0 minutes 00:07:24.466 Critical Temperature Time: 0 minutes 00:07:24.466 00:07:24.466 Number of Queues 00:07:24.466 ================ 00:07:24.466 Number of I/O Submission Queues: 64 00:07:24.466 Number of I/O Completion Queues: 64 00:07:24.466 00:07:24.466 ZNS Specific Controller Data 00:07:24.466 ============================ 00:07:24.466 Zone Append Size Limit: 0 00:07:24.466 00:07:24.466 00:07:24.466 Active Namespaces 00:07:24.466 ================= 00:07:24.466 Namespace ID:1 00:07:24.466 Error Recovery Timeout: Unlimited 00:07:24.466 Command Set Identifier: NVM (00h) 00:07:24.466 Deallocate: Supported 00:07:24.466 Deallocated/Unwritten Error: Supported 00:07:24.466 Deallocated Read Value: All 0x00 00:07:24.466 Deallocate in Write Zeroes: Not Supported 00:07:24.466 Deallocated Guard Field: 0xFFFF 00:07:24.466 Flush: Supported 00:07:24.466 Reservation: Not Supported 00:07:24.466 Namespace Sharing Capabilities: Private 00:07:24.466 Size (in LBAs): 1048576 (4GiB) 00:07:24.466 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.466 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.466 Thin Provisioning: Not Supported 00:07:24.466 Per-NS Atomic Units: No 00:07:24.466 Maximum Single Source Range Length: 128 00:07:24.466 Maximum Copy Length: 128 00:07:24.466 Maximum Source Range Count: 128 00:07:24.466 NGUID/EUI64 Never Reused: No 00:07:24.466 Namespace Write Protected: No 00:07:24.466 Number of LBA Formats: 8 00:07:24.466 Current LBA Format: LBA Format #04 00:07:24.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.466 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.466 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.466 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.466 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.466 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.466 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.466 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.466 00:07:24.466 NVM Specific Namespace Data 00:07:24.466 =========================== 00:07:24.466 Logical Block Storage Tag Mask: 0 00:07:24.466 Protection Information Capabilities: 00:07:24.466 16b Guard Protection Information Storage Tag Support: No 00:07:24.466 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.466 Storage Tag Check Read Support: No 00:07:24.466 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Namespace ID:2 00:07:24.466 Error Recovery Timeout: Unlimited 00:07:24.466 Command Set Identifier: NVM (00h) 00:07:24.466 Deallocate: Supported 00:07:24.466 Deallocated/Unwritten Error: Supported 00:07:24.466 Deallocated Read Value: All 0x00 00:07:24.466 Deallocate in Write Zeroes: Not Supported 00:07:24.466 Deallocated Guard Field: 0xFFFF 00:07:24.466 Flush: Supported 00:07:24.466 Reservation: Not Supported 00:07:24.466 Namespace Sharing Capabilities: Private 00:07:24.466 Size (in LBAs): 1048576 (4GiB) 00:07:24.466 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.466 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.466 Thin Provisioning: Not Supported 00:07:24.466 Per-NS Atomic Units: No 00:07:24.466 Maximum Single Source Range Length: 128 00:07:24.466 Maximum Copy Length: 128 00:07:24.466 Maximum Source Range Count: 128 00:07:24.466 NGUID/EUI64 Never Reused: No 00:07:24.466 Namespace Write Protected: No 00:07:24.466 Number of LBA Formats: 8 00:07:24.466 Current LBA Format: LBA Format #04 00:07:24.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.466 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.466 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.466 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.466 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.466 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.466 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.466 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.466 00:07:24.466 NVM Specific Namespace Data 00:07:24.466 =========================== 00:07:24.466 Logical Block Storage Tag Mask: 0 00:07:24.466 Protection Information Capabilities: 00:07:24.466 16b Guard Protection Information Storage Tag Support: No 00:07:24.466 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.466 Storage Tag Check Read Support: No 00:07:24.466 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.466 Namespace ID:3 00:07:24.466 Error Recovery Timeout: Unlimited 00:07:24.466 Command Set Identifier: NVM (00h) 00:07:24.466 Deallocate: Supported 00:07:24.466 Deallocated/Unwritten Error: Supported 00:07:24.466 Deallocated Read Value: All 0x00 00:07:24.466 Deallocate in Write Zeroes: Not Supported 00:07:24.466 Deallocated Guard Field: 0xFFFF 00:07:24.466 Flush: Supported 00:07:24.466 Reservation: Not Supported 00:07:24.466 
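The namespace sizes printed in these dumps are consistent with the current LBA format: each private namespace on serial 12342 reports 1048576 LBAs at format #04 (4096-byte data, no metadata), exactly 4GiB, and 12341's 1310720 LBAs likewise work out to exactly 5GiB. A one-line sanity check:

    # 1048576 * 4096 = 4294967296 (4 GiB); 1310720 * 4096 = 5368709120 (5 GiB)
    echo $((1048576 * 4096)) $((1310720 * 4096))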
Namespace Sharing Capabilities: Private 00:07:24.466 Size (in LBAs): 1048576 (4GiB) 00:07:24.466 Capacity (in LBAs): 1048576 (4GiB) 00:07:24.466 Utilization (in LBAs): 1048576 (4GiB) 00:07:24.466 Thin Provisioning: Not Supported 00:07:24.466 Per-NS Atomic Units: No 00:07:24.466 Maximum Single Source Range Length: 128 00:07:24.466 Maximum Copy Length: 128 00:07:24.466 Maximum Source Range Count: 128 00:07:24.466 NGUID/EUI64 Never Reused: No 00:07:24.466 Namespace Write Protected: No 00:07:24.466 Number of LBA Formats: 8 00:07:24.466 Current LBA Format: LBA Format #04 00:07:24.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.466 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.466 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.466 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.466 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.466 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.467 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:24.467 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.467 00:07:24.467 NVM Specific Namespace Data 00:07:24.467 =========================== 00:07:24.467 Logical Block Storage Tag Mask: 0 00:07:24.467 Protection Information Capabilities: 00:07:24.467 16b Guard Protection Information Storage Tag Support: No 00:07:24.467 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.467 Storage Tag Check Read Support: No 00:07:24.467 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.467 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:24.467 13:20:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:24.729 ===================================================== 00:07:24.729 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:24.729 ===================================================== 00:07:24.729 Controller Capabilities/Features 00:07:24.729 ================================ 00:07:24.729 Vendor ID: 1b36 00:07:24.729 Subsystem Vendor ID: 1af4 00:07:24.729 Serial Number: 12343 00:07:24.729 Model Number: QEMU NVMe Ctrl 00:07:24.729 Firmware Version: 8.0.0 00:07:24.729 Recommended Arb Burst: 6 00:07:24.729 IEEE OUI Identifier: 00 54 52 00:07:24.729 Multi-path I/O 00:07:24.729 May have multiple subsystem ports: No 00:07:24.729 May have multiple controllers: Yes 00:07:24.729 Associated with SR-IOV VF: No 00:07:24.729 Max Data Transfer Size: 524288 00:07:24.729 Max Number of Namespaces: 256 00:07:24.729 Max Number of I/O Queues: 64 00:07:24.729 NVMe Specification Version (VS): 1.4 00:07:24.729 NVMe Specification Version (Identify): 1.4 00:07:24.729 Maximum Queue Entries: 2048 
00:07:24.729 Contiguous Queues Required: Yes 00:07:24.729 Arbitration Mechanisms Supported 00:07:24.729 Weighted Round Robin: Not Supported 00:07:24.729 Vendor Specific: Not Supported 00:07:24.729 Reset Timeout: 7500 ms 00:07:24.729 Doorbell Stride: 4 bytes 00:07:24.729 NVM Subsystem Reset: Not Supported 00:07:24.729 Command Sets Supported 00:07:24.729 NVM Command Set: Supported 00:07:24.729 Boot Partition: Not Supported 00:07:24.729 Memory Page Size Minimum: 4096 bytes 00:07:24.729 Memory Page Size Maximum: 65536 bytes 00:07:24.729 Persistent Memory Region: Not Supported 00:07:24.729 Optional Asynchronous Events Supported 00:07:24.729 Namespace Attribute Notices: Supported 00:07:24.729 Firmware Activation Notices: Not Supported 00:07:24.729 ANA Change Notices: Not Supported 00:07:24.729 PLE Aggregate Log Change Notices: Not Supported 00:07:24.729 LBA Status Info Alert Notices: Not Supported 00:07:24.729 EGE Aggregate Log Change Notices: Not Supported 00:07:24.729 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.729 Zone Descriptor Change Notices: Not Supported 00:07:24.729 Discovery Log Change Notices: Not Supported 00:07:24.729 Controller Attributes 00:07:24.729 128-bit Host Identifier: Not Supported 00:07:24.729 Non-Operational Permissive Mode: Not Supported 00:07:24.729 NVM Sets: Not Supported 00:07:24.729 Read Recovery Levels: Not Supported 00:07:24.729 Endurance Groups: Supported 00:07:24.729 Predictable Latency Mode: Not Supported 00:07:24.729 Traffic Based Keep Alive: Not Supported 00:07:24.729 Namespace Granularity: Not Supported 00:07:24.729 SQ Associations: Not Supported 00:07:24.729 UUID List: Not Supported 00:07:24.729 Multi-Domain Subsystem: Not Supported 00:07:24.729 Fixed Capacity Management: Not Supported 00:07:24.729 Variable Capacity Management: Not Supported 00:07:24.729 Delete Endurance Group: Not Supported 00:07:24.729 Delete NVM Set: Not Supported 00:07:24.729 Extended LBA Formats Supported: Supported 00:07:24.729 Flexible Data Placement Supported: Supported 00:07:24.729 00:07:24.729 Controller Memory Buffer Support 00:07:24.729 ================================ 00:07:24.729 Supported: No 00:07:24.729 00:07:24.729 Persistent Memory Region Support 00:07:24.729 ================================ 00:07:24.730 Supported: No 00:07:24.730 00:07:24.730 Admin Command Set Attributes 00:07:24.730 ============================ 00:07:24.730 Security Send/Receive: Not Supported 00:07:24.730 Format NVM: Supported 00:07:24.730 Firmware Activate/Download: Not Supported 00:07:24.730 Namespace Management: Supported 00:07:24.730 Device Self-Test: Not Supported 00:07:24.730 Directives: Supported 00:07:24.730 NVMe-MI: Not Supported 00:07:24.730 Virtualization Management: Not Supported 00:07:24.730 Doorbell Buffer Config: Supported 00:07:24.730 Get LBA Status Capability: Not Supported 00:07:24.730 Command & Feature Lockdown Capability: Not Supported 00:07:24.730 Abort Command Limit: 4 00:07:24.730 Async Event Request Limit: 4 00:07:24.730 Number of Firmware Slots: N/A 00:07:24.730 Firmware Slot 1 Read-Only: N/A 00:07:24.730 Firmware Activation Without Reset: N/A 00:07:24.730 Multiple Update Detection Support: N/A 00:07:24.730 Firmware Update Granularity: No Information Provided 00:07:24.730 Per-Namespace SMART Log: Yes 00:07:24.730 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.730 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:24.730 Command Effects Log Page: Supported 00:07:24.730 Get Log Page Extended Data: Supported 00:07:24.730 Telemetry Log Pages: Not
Supported 00:07:24.730 Persistent Event Log Pages: Not Supported 00:07:24.730 Supported Log Pages Log Page: May Support 00:07:24.730 Commands Supported & Effects Log Page: Not Supported 00:07:24.730 Feature Identifiers & Effects Log Page: May Support 00:07:24.730 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.730 Data Area 4 for Telemetry Log: Not Supported 00:07:24.730 Error Log Page Entries Supported: 1 00:07:24.730 Keep Alive: Not Supported 00:07:24.730 00:07:24.730 NVM Command Set Attributes 00:07:24.730 ========================== 00:07:24.730 Submission Queue Entry Size 00:07:24.730 Max: 64 00:07:24.730 Min: 64 00:07:24.730 Completion Queue Entry Size 00:07:24.730 Max: 16 00:07:24.730 Min: 16 00:07:24.730 Number of Namespaces: 256 00:07:24.730 Compare Command: Supported 00:07:24.730 Write Uncorrectable Command: Not Supported 00:07:24.730 Dataset Management Command: Supported 00:07:24.730 Write Zeroes Command: Supported 00:07:24.730 Set Features Save Field: Supported 00:07:24.730 Reservations: Not Supported 00:07:24.730 Timestamp: Supported 00:07:24.730 Copy: Supported 00:07:24.730 Volatile Write Cache: Present 00:07:24.730 Atomic Write Unit (Normal): 1 00:07:24.730 Atomic Write Unit (PFail): 1 00:07:24.730 Atomic Compare & Write Unit: 1 00:07:24.730 Fused Compare & Write: Not Supported 00:07:24.730 Scatter-Gather List 00:07:24.730 SGL Command Set: Supported 00:07:24.730 SGL Keyed: Not Supported 00:07:24.730 SGL Bit Bucket Descriptor: Not Supported 00:07:24.730 SGL Metadata Pointer: Not Supported 00:07:24.730 Oversized SGL: Not Supported 00:07:24.730 SGL Metadata Address: Not Supported 00:07:24.730 SGL Offset: Not Supported 00:07:24.730 Transport SGL Data Block: Not Supported 00:07:24.730 Replay Protected Memory Block: Not Supported 00:07:24.730 00:07:24.730 Firmware Slot Information 00:07:24.730 ========================= 00:07:24.730 Active slot: 1 00:07:24.730 Slot 1 Firmware Revision: 1.0 00:07:24.730 00:07:24.730 00:07:24.730 Commands Supported and Effects 00:07:24.730 ============================== 00:07:24.730 Admin Commands 00:07:24.730 -------------- 00:07:24.730 Delete I/O Submission Queue (00h): Supported 00:07:24.730 Create I/O Submission Queue (01h): Supported 00:07:24.730 Get Log Page (02h): Supported 00:07:24.730 Delete I/O Completion Queue (04h): Supported 00:07:24.730 Create I/O Completion Queue (05h): Supported 00:07:24.730 Identify (06h): Supported 00:07:24.730 Abort (08h): Supported 00:07:24.730 Set Features (09h): Supported 00:07:24.730 Get Features (0Ah): Supported 00:07:24.730 Asynchronous Event Request (0Ch): Supported 00:07:24.730 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.730 Directive Send (19h): Supported 00:07:24.730 Directive Receive (1Ah): Supported 00:07:24.730 Virtualization Management (1Ch): Supported 00:07:24.730 Doorbell Buffer Config (7Ch): Supported 00:07:24.730 Format NVM (80h): Supported LBA-Change 00:07:24.730 I/O Commands 00:07:24.730 ------------ 00:07:24.730 Flush (00h): Supported LBA-Change 00:07:24.730 Write (01h): Supported LBA-Change 00:07:24.730 Read (02h): Supported 00:07:24.730 Compare (05h): Supported 00:07:24.730 Write Zeroes (08h): Supported LBA-Change 00:07:24.730 Dataset Management (09h): Supported LBA-Change 00:07:24.730 Unknown (0Ch): Supported 00:07:24.730 Unknown (12h): Supported 00:07:24.730 Copy (19h): Supported LBA-Change 00:07:24.730 Unknown (1Dh): Supported LBA-Change 00:07:24.730 00:07:24.730 Error Log 00:07:24.730 ========= 00:07:24.730 00:07:24.730 Arbitration 00:07:24.730 ===========
00:07:24.730 Arbitration Burst: no limit 00:07:24.730 00:07:24.730 Power Management 00:07:24.730 ================ 00:07:24.730 Number of Power States: 1 00:07:24.730 Current Power State: Power State #0 00:07:24.730 Power State #0: 00:07:24.730 Max Power: 25.00 W 00:07:24.730 Non-Operational State: Operational 00:07:24.730 Entry Latency: 16 microseconds 00:07:24.730 Exit Latency: 4 microseconds 00:07:24.730 Relative Read Throughput: 0 00:07:24.730 Relative Read Latency: 0 00:07:24.730 Relative Write Throughput: 0 00:07:24.730 Relative Write Latency: 0 00:07:24.730 Idle Power: Not Reported 00:07:24.730 Active Power: Not Reported 00:07:24.730 Non-Operational Permissive Mode: Not Supported 00:07:24.730 00:07:24.730 Health Information 00:07:24.730 ================== 00:07:24.730 Critical Warnings: 00:07:24.730 Available Spare Space: OK 00:07:24.730 Temperature: OK 00:07:24.730 Device Reliability: OK 00:07:24.730 Read Only: No 00:07:24.730 Volatile Memory Backup: OK 00:07:24.730 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.730 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.730 Available Spare: 0% 00:07:24.730 Available Spare Threshold: 0% 00:07:24.730 Life Percentage Used: 0% 00:07:24.730 Data Units Read: 905 00:07:24.730 Data Units Written: 834 00:07:24.730 Host Read Commands: 39970 00:07:24.730 Host Write Commands: 39393 00:07:24.730 Controller Busy Time: 0 minutes 00:07:24.730 Power Cycles: 0 00:07:24.730 Power On Hours: 0 hours 00:07:24.730 Unsafe Shutdowns: 0 00:07:24.730 Unrecoverable Media Errors: 0 00:07:24.730 Lifetime Error Log Entries: 0 00:07:24.730 Warning Temperature Time: 0 minutes 00:07:24.730 Critical Temperature Time: 0 minutes 00:07:24.730 00:07:24.730 Number of Queues 00:07:24.730 ================ 00:07:24.730 Number of I/O Submission Queues: 64 00:07:24.730 Number of I/O Completion Queues: 64 00:07:24.730 00:07:24.730 ZNS Specific Controller Data 00:07:24.730 ============================ 00:07:24.730 Zone Append Size Limit: 0 00:07:24.730 00:07:24.730 00:07:24.730 Active Namespaces 00:07:24.730 ================= 00:07:24.730 Namespace ID:1 00:07:24.730 Error Recovery Timeout: Unlimited 00:07:24.730 Command Set Identifier: NVM (00h) 00:07:24.730 Deallocate: Supported 00:07:24.730 Deallocated/Unwritten Error: Supported 00:07:24.730 Deallocated Read Value: All 0x00 00:07:24.730 Deallocate in Write Zeroes: Not Supported 00:07:24.730 Deallocated Guard Field: 0xFFFF 00:07:24.730 Flush: Supported 00:07:24.730 Reservation: Not Supported 00:07:24.730 Namespace Sharing Capabilities: Multiple Controllers 00:07:24.730 Size (in LBAs): 262144 (1GiB) 00:07:24.730 Capacity (in LBAs): 262144 (1GiB) 00:07:24.730 Utilization (in LBAs): 262144 (1GiB) 00:07:24.730 Thin Provisioning: Not Supported 00:07:24.730 Per-NS Atomic Units: No 00:07:24.730 Maximum Single Source Range Length: 128 00:07:24.730 Maximum Copy Length: 128 00:07:24.730 Maximum Source Range Count: 128 00:07:24.730 NGUID/EUI64 Never Reused: No 00:07:24.730 Namespace Write Protected: No 00:07:24.730 Endurance group ID: 1 00:07:24.730 Number of LBA Formats: 8 00:07:24.730 Current LBA Format: LBA Format #04 00:07:24.730 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.730 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.730 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.730 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.730 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.730 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.730 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:24.730 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.730 00:07:24.730 Get Feature FDP: 00:07:24.730 ================ 00:07:24.730 Enabled: Yes 00:07:24.730 FDP configuration index: 0 00:07:24.731 00:07:24.731 FDP configurations log page 00:07:24.731 =========================== 00:07:24.731 Number of FDP configurations: 1 00:07:24.731 Version: 0 00:07:24.731 Size: 112 00:07:24.731 FDP Configuration Descriptor: 0 00:07:24.731 Descriptor Size: 96 00:07:24.731 Reclaim Group Identifier format: 2 00:07:24.731 FDP Volatile Write Cache: Not Present 00:07:24.731 FDP Configuration: Valid 00:07:24.731 Vendor Specific Size: 0 00:07:24.731 Number of Reclaim Groups: 2 00:07:24.731 Number of Reclaim Unit Handles: 8 00:07:24.731 Max Placement Identifiers: 128 00:07:24.731 Number of Namespaces Supported: 256 00:07:24.731 Reclaim Unit Nominal Size: 6000000 bytes 00:07:24.731 Estimated Reclaim Unit Time Limit: Not Reported 00:07:24.731 RUH Desc #000: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #001: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #002: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #003: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #004: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #005: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #006: RUH Type: Initially Isolated 00:07:24.731 RUH Desc #007: RUH Type: Initially Isolated 00:07:24.731 00:07:24.731 FDP reclaim unit handle usage log page 00:07:24.731 ====================================== 00:07:24.731 Number of Reclaim Unit Handles: 8 00:07:24.731 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:24.731 RUH Usage Desc #001: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #002: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #003: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #004: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #005: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #006: RUH Attributes: Unused 00:07:24.731 RUH Usage Desc #007: RUH Attributes: Unused 00:07:24.731 00:07:24.731 FDP statistics log page 00:07:24.731 ======================= 00:07:24.731 Host bytes with metadata written: 539140096 00:07:24.731 Media bytes with metadata written: 539197440 00:07:24.731 Media bytes erased: 0 00:07:24.731 00:07:24.731 FDP events log page 00:07:24.731 =================== 00:07:24.731 Number of FDP events: 0 00:07:24.731 00:07:24.731 NVM Specific Namespace Data 00:07:24.731 =========================== 00:07:24.731 Logical Block Storage Tag Mask: 0 00:07:24.731 Protection Information Capabilities: 00:07:24.731 16b Guard Protection Information Storage Tag Support: No 00:07:24.731 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.731 Storage Tag Check Read Support: No 00:07:24.731 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.731 00:07:24.731 real 0m1.139s 00:07:24.731 user 0m0.441s 00:07:24.731 sys 0m0.494s 00:07:24.731 13:20:13 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.731 13:20:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:24.731 ************************************ 00:07:24.731 END TEST nvme_identify 00:07:24.731 ************************************ 00:07:24.731 13:20:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:24.731 13:20:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.731 13:20:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.731 13:20:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.731 ************************************ 00:07:24.731 START TEST nvme_perf 00:07:24.731 ************************************ 00:07:24.731 13:20:13 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:24.731 13:20:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:26.122 Initializing NVMe Controllers 00:07:26.122 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:26.122 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:26.122 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:26.122 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:26.122 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:26.122 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:26.122 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:26.122 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:26.122 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:26.122 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:26.122 Initialization complete. Launching workers. 
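Note: both commands this stage exercises are recorded verbatim in the trace above and can be replayed by hand against the same QEMU-emulated controllers. The sketch below is assembled purely from those recorded command lines; the build path under /home/vagrant/spdk_repo/spdk and the 0000:00:13.0 address are specific to this test VM (any controller from the attach list above works the same way), and flag readings beyond what the log itself shows should be checked against each tool's --help output.

  # Dump controller/namespace identify data for one controller, as the test's
  # loop over "${bdfs[@]}" does for every attached device.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Queue depth 128 (-q), sequential reads (-w) of 12288-byte I/Os (-o) for
  # 1 second (-t); -LL corresponds to the per-device latency summaries and
  # histograms printed below. The -i 0 and -N flags are copied as recorded.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N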
00:07:26.122 ======================================================== 00:07:26.122 Latency(us) 00:07:26.122 Device Information : IOPS MiB/s Average min max 00:07:26.122 PCIE (0000:00:10.0) NSID 1 from core 0: 16021.54 187.75 7999.20 6485.47 41365.18 00:07:26.122 PCIE (0000:00:11.0) NSID 1 from core 0: 16021.54 187.75 7988.30 6500.15 39613.95 00:07:26.122 PCIE (0000:00:13.0) NSID 1 from core 0: 16021.54 187.75 7976.33 6396.90 38415.93 00:07:26.122 PCIE (0000:00:12.0) NSID 1 from core 0: 16021.54 187.75 7963.76 6461.36 36695.13 00:07:26.122 PCIE (0000:00:12.0) NSID 2 from core 0: 16021.54 187.75 7951.50 6529.55 35047.09 00:07:26.122 PCIE (0000:00:12.0) NSID 3 from core 0: 16085.37 188.50 7907.84 6535.51 29511.42 00:07:26.122 ======================================================== 00:07:26.122 Total : 96193.09 1127.26 7964.45 6396.90 41365.18 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6755.249us 00:07:26.122 10.00000% : 7158.548us 00:07:26.122 25.00000% : 7360.197us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7965.145us 00:07:26.122 90.00000% : 8318.031us 00:07:26.122 95.00000% : 9225.452us 00:07:26.122 98.00000% : 10989.883us 00:07:26.122 99.00000% : 12703.902us 00:07:26.122 99.50000% : 34885.317us 00:07:26.122 99.90000% : 41136.443us 00:07:26.122 99.99000% : 41338.092us 00:07:26.122 99.99900% : 41539.742us 00:07:26.122 99.99990% : 41539.742us 00:07:26.122 99.99999% : 41539.742us 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6805.662us 00:07:26.122 10.00000% : 7208.960us 00:07:26.122 25.00000% : 7410.609us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7914.732us 00:07:26.122 90.00000% : 8267.618us 00:07:26.122 95.00000% : 9175.040us 00:07:26.122 98.00000% : 11191.532us 00:07:26.122 99.00000% : 12451.840us 00:07:26.122 99.50000% : 33877.071us 00:07:26.122 99.90000% : 39321.600us 00:07:26.122 99.99000% : 39724.898us 00:07:26.122 99.99900% : 39724.898us 00:07:26.122 99.99990% : 39724.898us 00:07:26.122 99.99999% : 39724.898us 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6856.074us 00:07:26.122 10.00000% : 7208.960us 00:07:26.122 25.00000% : 7410.609us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7965.145us 00:07:26.122 90.00000% : 8267.618us 00:07:26.122 95.00000% : 9074.215us 00:07:26.122 98.00000% : 11393.182us 00:07:26.122 99.00000% : 12250.191us 00:07:26.122 99.50000% : 33272.123us 00:07:26.122 99.90000% : 38111.705us 00:07:26.122 99.99000% : 38515.003us 00:07:26.122 99.99900% : 38515.003us 00:07:26.122 99.99990% : 38515.003us 00:07:26.122 99.99999% : 38515.003us 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6856.074us 00:07:26.122 10.00000% : 7208.960us 00:07:26.122 25.00000% : 7410.609us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7965.145us 00:07:26.122 90.00000% : 8267.618us 00:07:26.122 95.00000% : 9175.040us 00:07:26.122 98.00000% : 11141.120us 00:07:26.122 99.00000% : 
12300.603us 00:07:26.122 99.50000% : 31457.280us 00:07:26.122 99.90000% : 36498.511us 00:07:26.122 99.99000% : 36700.160us 00:07:26.122 99.99900% : 36700.160us 00:07:26.122 99.99990% : 36700.160us 00:07:26.122 99.99999% : 36700.160us 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6856.074us 00:07:26.122 10.00000% : 7208.960us 00:07:26.122 25.00000% : 7410.609us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7965.145us 00:07:26.122 90.00000% : 8267.618us 00:07:26.122 95.00000% : 9376.689us 00:07:26.122 98.00000% : 10838.646us 00:07:26.122 99.00000% : 12603.077us 00:07:26.122 99.50000% : 29844.086us 00:07:26.122 99.90000% : 34683.668us 00:07:26.122 99.99000% : 35086.966us 00:07:26.122 99.99900% : 35086.966us 00:07:26.122 99.99990% : 35086.966us 00:07:26.122 99.99999% : 35086.966us 00:07:26.122 00:07:26.122 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:26.122 ================================================================================= 00:07:26.122 1.00000% : 6856.074us 00:07:26.122 10.00000% : 7208.960us 00:07:26.122 25.00000% : 7410.609us 00:07:26.122 50.00000% : 7662.671us 00:07:26.122 75.00000% : 7965.145us 00:07:26.122 90.00000% : 8267.618us 00:07:26.122 95.00000% : 9427.102us 00:07:26.122 98.00000% : 10687.409us 00:07:26.122 99.00000% : 12603.077us 00:07:26.122 99.50000% : 22483.889us 00:07:26.122 99.90000% : 29239.138us 00:07:26.122 99.99000% : 29642.437us 00:07:26.122 99.99900% : 29642.437us 00:07:26.122 99.99990% : 29642.437us 00:07:26.122 99.99999% : 29642.437us 00:07:26.122 00:07:26.122 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:26.122 ============================================================================== 00:07:26.122 Range in us Cumulative IO count 00:07:26.122 6452.775 - 6503.188: 0.0560% ( 9) 00:07:26.122 6503.188 - 6553.600: 0.2054% ( 24) 00:07:26.122 6553.600 - 6604.012: 0.3673% ( 26) 00:07:26.122 6604.012 - 6654.425: 0.5727% ( 33) 00:07:26.122 6654.425 - 6704.837: 0.7532% ( 29) 00:07:26.122 6704.837 - 6755.249: 1.0085% ( 41) 00:07:26.122 6755.249 - 6805.662: 1.3197% ( 50) 00:07:26.122 6805.662 - 6856.074: 1.8053% ( 78) 00:07:26.122 6856.074 - 6906.486: 2.4714% ( 107) 00:07:26.122 6906.486 - 6956.898: 3.4300% ( 154) 00:07:26.122 6956.898 - 7007.311: 4.6501% ( 196) 00:07:26.122 7007.311 - 7057.723: 6.2438% ( 256) 00:07:26.122 7057.723 - 7108.135: 8.3541% ( 339) 00:07:26.122 7108.135 - 7158.548: 11.1429% ( 448) 00:07:26.122 7158.548 - 7208.960: 14.4422% ( 530) 00:07:26.122 7208.960 - 7259.372: 18.0403% ( 578) 00:07:26.122 7259.372 - 7309.785: 21.8937% ( 619) 00:07:26.122 7309.785 - 7360.197: 25.6848% ( 609) 00:07:26.122 7360.197 - 7410.609: 29.7249% ( 649) 00:07:26.122 7410.609 - 7461.022: 33.9517% ( 679) 00:07:26.123 7461.022 - 7511.434: 38.0914% ( 665) 00:07:26.123 7511.434 - 7561.846: 42.3058% ( 677) 00:07:26.123 7561.846 - 7612.258: 46.5824% ( 687) 00:07:26.123 7612.258 - 7662.671: 50.8777% ( 690) 00:07:26.123 7662.671 - 7713.083: 55.1295% ( 683) 00:07:26.123 7713.083 - 7763.495: 59.3376% ( 676) 00:07:26.123 7763.495 - 7813.908: 63.6579% ( 694) 00:07:26.123 7813.908 - 7864.320: 67.7229% ( 653) 00:07:26.123 7864.320 - 7914.732: 71.5326% ( 612) 00:07:26.123 7914.732 - 7965.145: 75.4482% ( 629) 00:07:26.123 7965.145 - 8015.557: 78.9031% ( 555) 00:07:26.123 8015.557 - 8065.969: 81.9783% ( 494) 00:07:26.123 8065.969 - 8116.382: 84.6738% ( 433) 
00:07:26.123 8116.382 - 8166.794: 86.7966% ( 341) 00:07:26.123 8166.794 - 8217.206: 88.2595% ( 235) 00:07:26.123 8217.206 - 8267.618: 89.4422% ( 190) 00:07:26.123 8267.618 - 8318.031: 90.3573% ( 147) 00:07:26.123 8318.031 - 8368.443: 91.0981% ( 119) 00:07:26.123 8368.443 - 8418.855: 91.6708% ( 92) 00:07:26.123 8418.855 - 8469.268: 92.1501% ( 77) 00:07:26.123 8469.268 - 8519.680: 92.5174% ( 59) 00:07:26.123 8519.680 - 8570.092: 92.8038% ( 46) 00:07:26.123 8570.092 - 8620.505: 93.1337% ( 53) 00:07:26.123 8620.505 - 8670.917: 93.3454% ( 34) 00:07:26.123 8670.917 - 8721.329: 93.5944% ( 40) 00:07:26.123 8721.329 - 8771.742: 93.7936% ( 32) 00:07:26.123 8771.742 - 8822.154: 93.9617% ( 27) 00:07:26.123 8822.154 - 8872.566: 94.1484% ( 30) 00:07:26.123 8872.566 - 8922.978: 94.3040% ( 25) 00:07:26.123 8922.978 - 8973.391: 94.4348% ( 21) 00:07:26.123 8973.391 - 9023.803: 94.5593% ( 20) 00:07:26.123 9023.803 - 9074.215: 94.6526% ( 15) 00:07:26.123 9074.215 - 9124.628: 94.7896% ( 22) 00:07:26.123 9124.628 - 9175.040: 94.8892% ( 16) 00:07:26.123 9175.040 - 9225.452: 95.0137% ( 20) 00:07:26.123 9225.452 - 9275.865: 95.1008% ( 14) 00:07:26.123 9275.865 - 9326.277: 95.1942% ( 15) 00:07:26.123 9326.277 - 9376.689: 95.3000% ( 17) 00:07:26.123 9376.689 - 9427.102: 95.3934% ( 15) 00:07:26.123 9427.102 - 9477.514: 95.4557% ( 10) 00:07:26.123 9477.514 - 9527.926: 95.5740% ( 19) 00:07:26.123 9527.926 - 9578.338: 95.6673% ( 15) 00:07:26.123 9578.338 - 9628.751: 95.7420% ( 12) 00:07:26.123 9628.751 - 9679.163: 95.8354% ( 15) 00:07:26.123 9679.163 - 9729.575: 95.9163% ( 13) 00:07:26.123 9729.575 - 9779.988: 95.9973% ( 13) 00:07:26.123 9779.988 - 9830.400: 96.0782% ( 13) 00:07:26.123 9830.400 - 9880.812: 96.1467% ( 11) 00:07:26.123 9880.812 - 9931.225: 96.2214% ( 12) 00:07:26.123 9931.225 - 9981.637: 96.3147% ( 15) 00:07:26.123 9981.637 - 10032.049: 96.4143% ( 16) 00:07:26.123 10032.049 - 10082.462: 96.5015% ( 14) 00:07:26.123 10082.462 - 10132.874: 96.6198% ( 19) 00:07:26.123 10132.874 - 10183.286: 96.7007% ( 13) 00:07:26.123 10183.286 - 10233.698: 96.7941% ( 15) 00:07:26.123 10233.698 - 10284.111: 96.8812% ( 14) 00:07:26.123 10284.111 - 10334.523: 96.9933% ( 18) 00:07:26.123 10334.523 - 10384.935: 97.0555% ( 10) 00:07:26.123 10384.935 - 10435.348: 97.1427% ( 14) 00:07:26.123 10435.348 - 10485.760: 97.2485% ( 17) 00:07:26.123 10485.760 - 10536.172: 97.3481% ( 16) 00:07:26.123 10536.172 - 10586.585: 97.4353% ( 14) 00:07:26.123 10586.585 - 10636.997: 97.5286% ( 15) 00:07:26.123 10636.997 - 10687.409: 97.6096% ( 13) 00:07:26.123 10687.409 - 10737.822: 97.7092% ( 16) 00:07:26.123 10737.822 - 10788.234: 97.7714% ( 10) 00:07:26.123 10788.234 - 10838.646: 97.8523% ( 13) 00:07:26.123 10838.646 - 10889.058: 97.9395% ( 14) 00:07:26.123 10889.058 - 10939.471: 97.9955% ( 9) 00:07:26.123 10939.471 - 10989.883: 98.0764% ( 13) 00:07:26.123 10989.883 - 11040.295: 98.1574% ( 13) 00:07:26.123 11040.295 - 11090.708: 98.2134% ( 9) 00:07:26.123 11090.708 - 11141.120: 98.2383% ( 4) 00:07:26.123 11141.120 - 11191.532: 98.2819% ( 7) 00:07:26.123 11191.532 - 11241.945: 98.3068% ( 4) 00:07:26.123 11241.945 - 11292.357: 98.3503% ( 7) 00:07:26.123 11292.357 - 11342.769: 98.3690% ( 3) 00:07:26.123 11342.769 - 11393.182: 98.3877% ( 3) 00:07:26.123 11393.182 - 11443.594: 98.4064% ( 3) 00:07:26.123 11645.243 - 11695.655: 98.4437% ( 6) 00:07:26.123 11695.655 - 11746.068: 98.4624% ( 3) 00:07:26.123 11746.068 - 11796.480: 98.4749% ( 2) 00:07:26.123 11796.480 - 11846.892: 98.4935% ( 3) 00:07:26.123 11846.892 - 11897.305: 98.5122% ( 3) 00:07:26.123 
11897.305 - 11947.717: 98.5309% ( 3) 00:07:26.123 11947.717 - 11998.129: 98.5433% ( 2) 00:07:26.123 11998.129 - 12048.542: 98.5558% ( 2) 00:07:26.123 12048.542 - 12098.954: 98.5807% ( 4) 00:07:26.123 12098.954 - 12149.366: 98.6305% ( 8) 00:07:26.123 12149.366 - 12199.778: 98.6616% ( 5) 00:07:26.123 12199.778 - 12250.191: 98.6927% ( 5) 00:07:26.123 12250.191 - 12300.603: 98.7363% ( 7) 00:07:26.123 12300.603 - 12351.015: 98.7674% ( 5) 00:07:26.123 12351.015 - 12401.428: 98.8110% ( 7) 00:07:26.123 12401.428 - 12451.840: 98.8484% ( 6) 00:07:26.123 12451.840 - 12502.252: 98.8857% ( 6) 00:07:26.123 12502.252 - 12552.665: 98.9168% ( 5) 00:07:26.123 12552.665 - 12603.077: 98.9542% ( 6) 00:07:26.123 12603.077 - 12653.489: 98.9978% ( 7) 00:07:26.123 12653.489 - 12703.902: 99.0227% ( 4) 00:07:26.123 12703.902 - 12754.314: 99.0538% ( 5) 00:07:26.123 12754.314 - 12804.726: 99.0787% ( 4) 00:07:26.123 12804.726 - 12855.138: 99.0974% ( 3) 00:07:26.123 12855.138 - 12905.551: 99.1160% ( 3) 00:07:26.123 12905.551 - 13006.375: 99.1534% ( 6) 00:07:26.123 13006.375 - 13107.200: 99.1907% ( 6) 00:07:26.123 13107.200 - 13208.025: 99.2032% ( 2) 00:07:26.123 33473.772 - 33675.422: 99.2156% ( 2) 00:07:26.123 33675.422 - 33877.071: 99.2654% ( 8) 00:07:26.123 33877.071 - 34078.720: 99.3152% ( 8) 00:07:26.123 34078.720 - 34280.369: 99.3588% ( 7) 00:07:26.123 34280.369 - 34482.018: 99.4211% ( 10) 00:07:26.123 34482.018 - 34683.668: 99.4646% ( 7) 00:07:26.123 34683.668 - 34885.317: 99.5082% ( 7) 00:07:26.123 34885.317 - 35086.966: 99.5580% ( 8) 00:07:26.123 35086.966 - 35288.615: 99.6016% ( 7) 00:07:26.123 39724.898 - 39926.548: 99.6452% ( 7) 00:07:26.123 39926.548 - 40128.197: 99.6950% ( 8) 00:07:26.123 40128.197 - 40329.846: 99.7448% ( 8) 00:07:26.123 40329.846 - 40531.495: 99.7946% ( 8) 00:07:26.123 40531.495 - 40733.145: 99.8444% ( 8) 00:07:26.123 40733.145 - 40934.794: 99.8942% ( 8) 00:07:26.123 40934.794 - 41136.443: 99.9440% ( 8) 00:07:26.123 41136.443 - 41338.092: 99.9938% ( 8) 00:07:26.123 41338.092 - 41539.742: 100.0000% ( 1) 00:07:26.123 00:07:26.123 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:26.123 ============================================================================== 00:07:26.123 Range in us Cumulative IO count 00:07:26.123 6452.775 - 6503.188: 0.0062% ( 1) 00:07:26.123 6503.188 - 6553.600: 0.0809% ( 12) 00:07:26.123 6553.600 - 6604.012: 0.1307% ( 8) 00:07:26.123 6604.012 - 6654.425: 0.3237% ( 31) 00:07:26.123 6654.425 - 6704.837: 0.5789% ( 41) 00:07:26.123 6704.837 - 6755.249: 0.7968% ( 35) 00:07:26.123 6755.249 - 6805.662: 1.0147% ( 35) 00:07:26.123 6805.662 - 6856.074: 1.3135% ( 48) 00:07:26.123 6856.074 - 6906.486: 1.7430% ( 69) 00:07:26.123 6906.486 - 6956.898: 2.4153% ( 108) 00:07:26.123 6956.898 - 7007.311: 3.2931% ( 141) 00:07:26.123 7007.311 - 7057.723: 4.4821% ( 191) 00:07:26.123 7057.723 - 7108.135: 6.0881% ( 258) 00:07:26.123 7108.135 - 7158.548: 8.1736% ( 335) 00:07:26.123 7158.548 - 7208.960: 10.9624% ( 448) 00:07:26.123 7208.960 - 7259.372: 14.4671% ( 563) 00:07:26.123 7259.372 - 7309.785: 18.5881% ( 662) 00:07:26.123 7309.785 - 7360.197: 22.8523% ( 685) 00:07:26.123 7360.197 - 7410.609: 27.3469% ( 722) 00:07:26.123 7410.609 - 7461.022: 32.1277% ( 768) 00:07:26.123 7461.022 - 7511.434: 36.9522% ( 775) 00:07:26.123 7511.434 - 7561.846: 41.8015% ( 779) 00:07:26.123 7561.846 - 7612.258: 46.8065% ( 804) 00:07:26.123 7612.258 - 7662.671: 51.7617% ( 796) 00:07:26.123 7662.671 - 7713.083: 56.6920% ( 792) 00:07:26.123 7713.083 - 7763.495: 61.5600% ( 782) 00:07:26.123 
7763.495 - 7813.908: 66.2786% ( 758) 00:07:26.123 7813.908 - 7864.320: 70.9039% ( 743) 00:07:26.123 7864.320 - 7914.732: 75.0996% ( 674) 00:07:26.123 7914.732 - 7965.145: 78.8907% ( 609) 00:07:26.123 7965.145 - 8015.557: 82.1900% ( 530) 00:07:26.123 8015.557 - 8065.969: 84.8855% ( 433) 00:07:26.123 8065.969 - 8116.382: 86.8152% ( 310) 00:07:26.123 8116.382 - 8166.794: 88.3653% ( 249) 00:07:26.123 8166.794 - 8217.206: 89.4983% ( 182) 00:07:26.123 8217.206 - 8267.618: 90.3884% ( 143) 00:07:26.123 8267.618 - 8318.031: 91.1292% ( 119) 00:07:26.123 8318.031 - 8368.443: 91.7331% ( 97) 00:07:26.123 8368.443 - 8418.855: 92.2435% ( 82) 00:07:26.123 8418.855 - 8469.268: 92.6544% ( 66) 00:07:26.123 8469.268 - 8519.680: 93.0217% ( 59) 00:07:26.123 8519.680 - 8570.092: 93.3454% ( 52) 00:07:26.123 8570.092 - 8620.505: 93.6006% ( 41) 00:07:26.123 8620.505 - 8670.917: 93.8434% ( 39) 00:07:26.123 8670.917 - 8721.329: 93.9990% ( 25) 00:07:26.123 8721.329 - 8771.742: 94.1360% ( 22) 00:07:26.123 8771.742 - 8822.154: 94.2605% ( 20) 00:07:26.123 8822.154 - 8872.566: 94.3601% ( 16) 00:07:26.123 8872.566 - 8922.978: 94.4783% ( 19) 00:07:26.123 8922.978 - 8973.391: 94.5779% ( 16) 00:07:26.123 8973.391 - 9023.803: 94.6900% ( 18) 00:07:26.123 9023.803 - 9074.215: 94.7958% ( 17) 00:07:26.123 9074.215 - 9124.628: 94.9141% ( 19) 00:07:26.123 9124.628 - 9175.040: 95.0448% ( 21) 00:07:26.124 9175.040 - 9225.452: 95.1569% ( 18) 00:07:26.124 9225.452 - 9275.865: 95.2751% ( 19) 00:07:26.124 9275.865 - 9326.277: 95.3872% ( 18) 00:07:26.124 9326.277 - 9376.689: 95.4868% ( 16) 00:07:26.124 9376.689 - 9427.102: 95.5802% ( 15) 00:07:26.124 9427.102 - 9477.514: 95.6673% ( 14) 00:07:26.124 9477.514 - 9527.926: 95.7732% ( 17) 00:07:26.124 9527.926 - 9578.338: 95.8416% ( 11) 00:07:26.124 9578.338 - 9628.751: 95.9101% ( 11) 00:07:26.124 9628.751 - 9679.163: 95.9786% ( 11) 00:07:26.124 9679.163 - 9729.575: 96.0533% ( 12) 00:07:26.124 9729.575 - 9779.988: 96.1031% ( 8) 00:07:26.124 9779.988 - 9830.400: 96.1467% ( 7) 00:07:26.124 9830.400 - 9880.812: 96.1965% ( 8) 00:07:26.124 9880.812 - 9931.225: 96.2400% ( 7) 00:07:26.124 9931.225 - 9981.637: 96.2649% ( 4) 00:07:26.124 9981.637 - 10032.049: 96.2898% ( 4) 00:07:26.124 10032.049 - 10082.462: 96.3210% ( 5) 00:07:26.124 10082.462 - 10132.874: 96.3645% ( 7) 00:07:26.124 10132.874 - 10183.286: 96.4143% ( 8) 00:07:26.124 10183.286 - 10233.698: 96.4828% ( 11) 00:07:26.124 10233.698 - 10284.111: 96.5451% ( 10) 00:07:26.124 10284.111 - 10334.523: 96.5949% ( 8) 00:07:26.124 10334.523 - 10384.935: 96.6509% ( 9) 00:07:26.124 10384.935 - 10435.348: 96.6945% ( 7) 00:07:26.124 10435.348 - 10485.760: 96.7380% ( 7) 00:07:26.124 10485.760 - 10536.172: 96.8003% ( 10) 00:07:26.124 10536.172 - 10586.585: 96.8625% ( 10) 00:07:26.124 10586.585 - 10636.997: 96.9373% ( 12) 00:07:26.124 10636.997 - 10687.409: 97.0431% ( 17) 00:07:26.124 10687.409 - 10737.822: 97.1551% ( 18) 00:07:26.124 10737.822 - 10788.234: 97.2610% ( 17) 00:07:26.124 10788.234 - 10838.646: 97.3730% ( 18) 00:07:26.124 10838.646 - 10889.058: 97.4788% ( 17) 00:07:26.124 10889.058 - 10939.471: 97.6096% ( 21) 00:07:26.124 10939.471 - 10989.883: 97.7154% ( 17) 00:07:26.124 10989.883 - 11040.295: 97.8088% ( 15) 00:07:26.124 11040.295 - 11090.708: 97.8835% ( 12) 00:07:26.124 11090.708 - 11141.120: 97.9457% ( 10) 00:07:26.124 11141.120 - 11191.532: 98.0204% ( 12) 00:07:26.124 11191.532 - 11241.945: 98.0951% ( 12) 00:07:26.124 11241.945 - 11292.357: 98.1574% ( 10) 00:07:26.124 11292.357 - 11342.769: 98.2258% ( 11) 00:07:26.124 11342.769 - 11393.182: 
98.2943% ( 11) 00:07:26.124 11393.182 - 11443.594: 98.3503% ( 9) 00:07:26.124 11443.594 - 11494.006: 98.3690% ( 3) 00:07:26.124 11494.006 - 11544.418: 98.3939% ( 4) 00:07:26.124 11544.418 - 11594.831: 98.4064% ( 2) 00:07:26.124 11695.655 - 11746.068: 98.4313% ( 4) 00:07:26.124 11746.068 - 11796.480: 98.4500% ( 3) 00:07:26.124 11796.480 - 11846.892: 98.4749% ( 4) 00:07:26.124 11846.892 - 11897.305: 98.5122% ( 6) 00:07:26.124 11897.305 - 11947.717: 98.5745% ( 10) 00:07:26.124 11947.717 - 11998.129: 98.6056% ( 5) 00:07:26.124 11998.129 - 12048.542: 98.6492% ( 7) 00:07:26.124 12048.542 - 12098.954: 98.6990% ( 8) 00:07:26.124 12098.954 - 12149.366: 98.7425% ( 7) 00:07:26.124 12149.366 - 12199.778: 98.7799% ( 6) 00:07:26.124 12199.778 - 12250.191: 98.8297% ( 8) 00:07:26.124 12250.191 - 12300.603: 98.8733% ( 7) 00:07:26.124 12300.603 - 12351.015: 98.9168% ( 7) 00:07:26.124 12351.015 - 12401.428: 98.9604% ( 7) 00:07:26.124 12401.428 - 12451.840: 99.0040% ( 7) 00:07:26.124 12451.840 - 12502.252: 99.0476% ( 7) 00:07:26.124 12502.252 - 12552.665: 99.0911% ( 7) 00:07:26.124 12552.665 - 12603.077: 99.1347% ( 7) 00:07:26.124 12603.077 - 12653.489: 99.1658% ( 5) 00:07:26.124 12653.489 - 12703.902: 99.1845% ( 3) 00:07:26.124 12703.902 - 12754.314: 99.2032% ( 3) 00:07:26.124 32465.526 - 32667.175: 99.2156% ( 2) 00:07:26.124 32667.175 - 32868.825: 99.2654% ( 8) 00:07:26.124 32868.825 - 33070.474: 99.3215% ( 9) 00:07:26.124 33070.474 - 33272.123: 99.3713% ( 8) 00:07:26.124 33272.123 - 33473.772: 99.4273% ( 9) 00:07:26.124 33473.772 - 33675.422: 99.4833% ( 9) 00:07:26.124 33675.422 - 33877.071: 99.5331% ( 8) 00:07:26.124 33877.071 - 34078.720: 99.5891% ( 9) 00:07:26.124 34078.720 - 34280.369: 99.6016% ( 2) 00:07:26.124 37910.055 - 38111.705: 99.6078% ( 1) 00:07:26.124 38111.705 - 38313.354: 99.6576% ( 8) 00:07:26.124 38313.354 - 38515.003: 99.7074% ( 8) 00:07:26.124 38515.003 - 38716.652: 99.7634% ( 9) 00:07:26.124 38716.652 - 38918.302: 99.8195% ( 9) 00:07:26.124 38918.302 - 39119.951: 99.8693% ( 8) 00:07:26.124 39119.951 - 39321.600: 99.9191% ( 8) 00:07:26.124 39321.600 - 39523.249: 99.9689% ( 8) 00:07:26.124 39523.249 - 39724.898: 100.0000% ( 5) 00:07:26.124 00:07:26.124 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:26.124 ============================================================================== 00:07:26.124 Range in us Cumulative IO count 00:07:26.124 6377.157 - 6402.363: 0.0125% ( 2) 00:07:26.124 6402.363 - 6427.569: 0.0249% ( 2) 00:07:26.124 6427.569 - 6452.775: 0.0311% ( 1) 00:07:26.124 6452.775 - 6503.188: 0.0934% ( 10) 00:07:26.124 6503.188 - 6553.600: 0.1556% ( 10) 00:07:26.124 6553.600 - 6604.012: 0.1868% ( 5) 00:07:26.124 6604.012 - 6654.425: 0.2677% ( 13) 00:07:26.124 6654.425 - 6704.837: 0.4544% ( 30) 00:07:26.124 6704.837 - 6755.249: 0.6474% ( 31) 00:07:26.124 6755.249 - 6805.662: 0.8902% ( 39) 00:07:26.124 6805.662 - 6856.074: 1.2948% ( 65) 00:07:26.124 6856.074 - 6906.486: 1.8364% ( 87) 00:07:26.124 6906.486 - 6956.898: 2.6332% ( 128) 00:07:26.124 6956.898 - 7007.311: 3.6106% ( 157) 00:07:26.124 7007.311 - 7057.723: 4.9490% ( 215) 00:07:26.124 7057.723 - 7108.135: 6.4430% ( 240) 00:07:26.124 7108.135 - 7158.548: 8.6965% ( 362) 00:07:26.124 7158.548 - 7208.960: 11.6098% ( 468) 00:07:26.124 7208.960 - 7259.372: 14.9527% ( 537) 00:07:26.124 7259.372 - 7309.785: 18.7064% ( 603) 00:07:26.124 7309.785 - 7360.197: 23.0453% ( 697) 00:07:26.124 7360.197 - 7410.609: 27.4838% ( 713) 00:07:26.124 7410.609 - 7461.022: 32.1838% ( 755) 00:07:26.124 7461.022 - 7511.434: 37.0954% ( 
789) 00:07:26.124 7511.434 - 7561.846: 42.0070% ( 789) 00:07:26.124 7561.846 - 7612.258: 46.9497% ( 794) 00:07:26.124 7612.258 - 7662.671: 51.7617% ( 773) 00:07:26.124 7662.671 - 7713.083: 56.5364% ( 767) 00:07:26.124 7713.083 - 7763.495: 61.2488% ( 757) 00:07:26.124 7763.495 - 7813.908: 65.8553% ( 740) 00:07:26.124 7813.908 - 7864.320: 70.3312% ( 719) 00:07:26.124 7864.320 - 7914.732: 74.5518% ( 678) 00:07:26.124 7914.732 - 7965.145: 78.4861% ( 632) 00:07:26.124 7965.145 - 8015.557: 81.7916% ( 531) 00:07:26.124 8015.557 - 8065.969: 84.5369% ( 441) 00:07:26.124 8065.969 - 8116.382: 86.6970% ( 347) 00:07:26.124 8116.382 - 8166.794: 88.3466% ( 265) 00:07:26.124 8166.794 - 8217.206: 89.5481% ( 193) 00:07:26.124 8217.206 - 8267.618: 90.5627% ( 163) 00:07:26.124 8267.618 - 8318.031: 91.4031% ( 135) 00:07:26.124 8318.031 - 8368.443: 92.0505% ( 104) 00:07:26.124 8368.443 - 8418.855: 92.5548% ( 81) 00:07:26.124 8418.855 - 8469.268: 92.9345% ( 61) 00:07:26.124 8469.268 - 8519.680: 93.3205% ( 62) 00:07:26.124 8519.680 - 8570.092: 93.6940% ( 60) 00:07:26.124 8570.092 - 8620.505: 94.0239% ( 53) 00:07:26.124 8620.505 - 8670.917: 94.3165% ( 47) 00:07:26.124 8670.917 - 8721.329: 94.5344% ( 35) 00:07:26.124 8721.329 - 8771.742: 94.6713% ( 22) 00:07:26.124 8771.742 - 8822.154: 94.7585% ( 14) 00:07:26.124 8822.154 - 8872.566: 94.8269% ( 11) 00:07:26.124 8872.566 - 8922.978: 94.8892% ( 10) 00:07:26.124 8922.978 - 8973.391: 94.9328% ( 7) 00:07:26.124 8973.391 - 9023.803: 94.9763% ( 7) 00:07:26.124 9023.803 - 9074.215: 95.0261% ( 8) 00:07:26.124 9074.215 - 9124.628: 95.0822% ( 9) 00:07:26.124 9124.628 - 9175.040: 95.1631% ( 13) 00:07:26.124 9175.040 - 9225.452: 95.2316% ( 11) 00:07:26.124 9225.452 - 9275.865: 95.2938% ( 10) 00:07:26.124 9275.865 - 9326.277: 95.3685% ( 12) 00:07:26.124 9326.277 - 9376.689: 95.4557% ( 14) 00:07:26.124 9376.689 - 9427.102: 95.5491% ( 15) 00:07:26.124 9427.102 - 9477.514: 95.6175% ( 11) 00:07:26.124 9477.514 - 9527.926: 95.6922% ( 12) 00:07:26.124 9527.926 - 9578.338: 95.7607% ( 11) 00:07:26.124 9578.338 - 9628.751: 95.8292% ( 11) 00:07:26.124 9628.751 - 9679.163: 95.9101% ( 13) 00:07:26.124 9679.163 - 9729.575: 95.9786% ( 11) 00:07:26.124 9729.575 - 9779.988: 96.0408% ( 10) 00:07:26.124 9779.988 - 9830.400: 96.1031% ( 10) 00:07:26.124 9830.400 - 9880.812: 96.1653% ( 10) 00:07:26.124 9880.812 - 9931.225: 96.2338% ( 11) 00:07:26.124 9931.225 - 9981.637: 96.2898% ( 9) 00:07:26.124 9981.637 - 10032.049: 96.3459% ( 9) 00:07:26.124 10032.049 - 10082.462: 96.4143% ( 11) 00:07:26.124 10082.462 - 10132.874: 96.5015% ( 14) 00:07:26.124 10132.874 - 10183.286: 96.5637% ( 10) 00:07:26.124 10183.286 - 10233.698: 96.6509% ( 14) 00:07:26.124 10233.698 - 10284.111: 96.7194% ( 11) 00:07:26.124 10284.111 - 10334.523: 96.7754% ( 9) 00:07:26.124 10334.523 - 10384.935: 96.8376% ( 10) 00:07:26.124 10384.935 - 10435.348: 96.8937% ( 9) 00:07:26.124 10435.348 - 10485.760: 96.9497% ( 9) 00:07:26.124 10485.760 - 10536.172: 97.0120% ( 10) 00:07:26.124 10536.172 - 10586.585: 97.0742% ( 10) 00:07:26.124 10586.585 - 10636.997: 97.1302% ( 9) 00:07:26.124 10636.997 - 10687.409: 97.1800% ( 8) 00:07:26.124 10687.409 - 10737.822: 97.2423% ( 10) 00:07:26.125 10737.822 - 10788.234: 97.3045% ( 10) 00:07:26.125 10788.234 - 10838.646: 97.3543% ( 8) 00:07:26.125 10838.646 - 10889.058: 97.4041% ( 8) 00:07:26.125 10889.058 - 10939.471: 97.4353% ( 5) 00:07:26.125 10939.471 - 10989.883: 97.4726% ( 6) 00:07:26.125 10989.883 - 11040.295: 97.5224% ( 8) 00:07:26.125 11040.295 - 11090.708: 97.5660% ( 7) 00:07:26.125 11090.708 - 
11141.120: 97.6158% ( 8) 00:07:26.125 11141.120 - 11191.532: 97.6967% ( 13) 00:07:26.125 11191.532 - 11241.945: 97.7714% ( 12) 00:07:26.125 11241.945 - 11292.357: 97.8835% ( 18) 00:07:26.125 11292.357 - 11342.769: 97.9519% ( 11) 00:07:26.125 11342.769 - 11393.182: 98.0142% ( 10) 00:07:26.125 11393.182 - 11443.594: 98.0827% ( 11) 00:07:26.125 11443.594 - 11494.006: 98.1511% ( 11) 00:07:26.125 11494.006 - 11544.418: 98.2196% ( 11) 00:07:26.125 11544.418 - 11594.831: 98.3068% ( 14) 00:07:26.125 11594.831 - 11645.243: 98.3877% ( 13) 00:07:26.125 11645.243 - 11695.655: 98.4624% ( 12) 00:07:26.125 11695.655 - 11746.068: 98.5496% ( 14) 00:07:26.125 11746.068 - 11796.480: 98.6243% ( 12) 00:07:26.125 11796.480 - 11846.892: 98.7052% ( 13) 00:07:26.125 11846.892 - 11897.305: 98.7612% ( 9) 00:07:26.125 11897.305 - 11947.717: 98.8172% ( 9) 00:07:26.125 11947.717 - 11998.129: 98.8857% ( 11) 00:07:26.125 11998.129 - 12048.542: 98.9355% ( 8) 00:07:26.125 12048.542 - 12098.954: 98.9542% ( 3) 00:07:26.125 12098.954 - 12149.366: 98.9791% ( 4) 00:07:26.125 12149.366 - 12199.778: 98.9978% ( 3) 00:07:26.125 12199.778 - 12250.191: 99.0227% ( 4) 00:07:26.125 12250.191 - 12300.603: 99.0476% ( 4) 00:07:26.125 12300.603 - 12351.015: 99.0725% ( 4) 00:07:26.125 12351.015 - 12401.428: 99.0974% ( 4) 00:07:26.125 12401.428 - 12451.840: 99.1223% ( 4) 00:07:26.125 12451.840 - 12502.252: 99.1472% ( 4) 00:07:26.125 12502.252 - 12552.665: 99.1721% ( 4) 00:07:26.125 12552.665 - 12603.077: 99.1970% ( 4) 00:07:26.125 12603.077 - 12653.489: 99.2032% ( 1) 00:07:26.125 31457.280 - 31658.929: 99.2156% ( 2) 00:07:26.125 31658.929 - 31860.578: 99.2468% ( 5) 00:07:26.125 31860.578 - 32062.228: 99.2779% ( 5) 00:07:26.125 32062.228 - 32263.877: 99.3090% ( 5) 00:07:26.125 32263.877 - 32465.526: 99.3401% ( 5) 00:07:26.125 32465.526 - 32667.175: 99.3775% ( 6) 00:07:26.125 32667.175 - 32868.825: 99.4273% ( 8) 00:07:26.125 32868.825 - 33070.474: 99.4833% ( 9) 00:07:26.125 33070.474 - 33272.123: 99.5393% ( 9) 00:07:26.125 33272.123 - 33473.772: 99.5891% ( 8) 00:07:26.125 33473.772 - 33675.422: 99.6016% ( 2) 00:07:26.125 36700.160 - 36901.809: 99.6140% ( 2) 00:07:26.125 36901.809 - 37103.458: 99.6638% ( 8) 00:07:26.125 37103.458 - 37305.108: 99.7074% ( 7) 00:07:26.125 37305.108 - 37506.757: 99.7572% ( 8) 00:07:26.125 37506.757 - 37708.406: 99.8132% ( 9) 00:07:26.125 37708.406 - 37910.055: 99.8630% ( 8) 00:07:26.125 37910.055 - 38111.705: 99.9191% ( 9) 00:07:26.125 38111.705 - 38313.354: 99.9689% ( 8) 00:07:26.125 38313.354 - 38515.003: 100.0000% ( 5) 00:07:26.125 00:07:26.125 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:26.125 ============================================================================== 00:07:26.125 Range in us Cumulative IO count 00:07:26.125 6452.775 - 6503.188: 0.0249% ( 4) 00:07:26.125 6503.188 - 6553.600: 0.0747% ( 8) 00:07:26.125 6553.600 - 6604.012: 0.1121% ( 6) 00:07:26.125 6604.012 - 6654.425: 0.2303% ( 19) 00:07:26.125 6654.425 - 6704.837: 0.3424% ( 18) 00:07:26.125 6704.837 - 6755.249: 0.4607% ( 19) 00:07:26.125 6755.249 - 6805.662: 0.6910% ( 37) 00:07:26.125 6805.662 - 6856.074: 1.0520% ( 58) 00:07:26.125 6856.074 - 6906.486: 1.4878% ( 70) 00:07:26.125 6906.486 - 6956.898: 2.1726% ( 110) 00:07:26.125 6956.898 - 7007.311: 3.1624% ( 159) 00:07:26.125 7007.311 - 7057.723: 4.5070% ( 216) 00:07:26.125 7057.723 - 7108.135: 6.1255% ( 260) 00:07:26.125 7108.135 - 7158.548: 8.2109% ( 335) 00:07:26.125 7158.548 - 7208.960: 11.1927% ( 479) 00:07:26.125 7208.960 - 7259.372: 14.2181% ( 486) 00:07:26.125 
7259.372 - 7309.785: 18.0777% ( 620) 00:07:26.125 7309.785 - 7360.197: 22.4104% ( 696) 00:07:26.125 7360.197 - 7410.609: 26.9111% ( 723) 00:07:26.125 7410.609 - 7461.022: 31.7542% ( 778) 00:07:26.125 7461.022 - 7511.434: 36.6721% ( 790) 00:07:26.125 7511.434 - 7561.846: 41.7393% ( 814) 00:07:26.125 7561.846 - 7612.258: 46.7318% ( 802) 00:07:26.125 7612.258 - 7662.671: 51.7306% ( 803) 00:07:26.125 7662.671 - 7713.083: 56.5239% ( 770) 00:07:26.125 7713.083 - 7763.495: 61.4293% ( 788) 00:07:26.125 7763.495 - 7813.908: 65.9861% ( 732) 00:07:26.125 7813.908 - 7864.320: 70.5117% ( 727) 00:07:26.125 7864.320 - 7914.732: 74.7510% ( 681) 00:07:26.125 7914.732 - 7965.145: 78.5732% ( 614) 00:07:26.125 7965.145 - 8015.557: 81.9099% ( 536) 00:07:26.125 8015.557 - 8065.969: 84.5618% ( 426) 00:07:26.125 8065.969 - 8116.382: 86.7966% ( 359) 00:07:26.125 8116.382 - 8166.794: 88.4898% ( 272) 00:07:26.125 8166.794 - 8217.206: 89.7846% ( 208) 00:07:26.125 8217.206 - 8267.618: 90.7433% ( 154) 00:07:26.125 8267.618 - 8318.031: 91.5027% ( 122) 00:07:26.125 8318.031 - 8368.443: 92.0194% ( 83) 00:07:26.125 8368.443 - 8418.855: 92.4863% ( 75) 00:07:26.125 8418.855 - 8469.268: 92.8972% ( 66) 00:07:26.125 8469.268 - 8519.680: 93.2831% ( 62) 00:07:26.125 8519.680 - 8570.092: 93.6006% ( 51) 00:07:26.125 8570.092 - 8620.505: 93.8745% ( 44) 00:07:26.125 8620.505 - 8670.917: 94.0737% ( 32) 00:07:26.125 8670.917 - 8721.329: 94.2418% ( 27) 00:07:26.125 8721.329 - 8771.742: 94.3725% ( 21) 00:07:26.125 8771.742 - 8822.154: 94.4285% ( 9) 00:07:26.125 8822.154 - 8872.566: 94.5095% ( 13) 00:07:26.125 8872.566 - 8922.978: 94.5966% ( 14) 00:07:26.125 8922.978 - 8973.391: 94.6775% ( 13) 00:07:26.125 8973.391 - 9023.803: 94.7647% ( 14) 00:07:26.125 9023.803 - 9074.215: 94.9016% ( 22) 00:07:26.125 9074.215 - 9124.628: 94.9701% ( 11) 00:07:26.125 9124.628 - 9175.040: 95.0261% ( 9) 00:07:26.125 9175.040 - 9225.452: 95.1133% ( 14) 00:07:26.125 9225.452 - 9275.865: 95.2129% ( 16) 00:07:26.125 9275.865 - 9326.277: 95.2814% ( 11) 00:07:26.125 9326.277 - 9376.689: 95.3623% ( 13) 00:07:26.125 9376.689 - 9427.102: 95.4432% ( 13) 00:07:26.125 9427.102 - 9477.514: 95.5179% ( 12) 00:07:26.125 9477.514 - 9527.926: 95.5802% ( 10) 00:07:26.125 9527.926 - 9578.338: 95.6736% ( 15) 00:07:26.125 9578.338 - 9628.751: 95.7732% ( 16) 00:07:26.125 9628.751 - 9679.163: 95.8603% ( 14) 00:07:26.125 9679.163 - 9729.575: 95.9599% ( 16) 00:07:26.125 9729.575 - 9779.988: 96.0471% ( 14) 00:07:26.125 9779.988 - 9830.400: 96.1404% ( 15) 00:07:26.125 9830.400 - 9880.812: 96.2338% ( 15) 00:07:26.125 9880.812 - 9931.225: 96.3272% ( 15) 00:07:26.125 9931.225 - 9981.637: 96.4206% ( 15) 00:07:26.125 9981.637 - 10032.049: 96.5139% ( 15) 00:07:26.125 10032.049 - 10082.462: 96.5824% ( 11) 00:07:26.125 10082.462 - 10132.874: 96.6571% ( 12) 00:07:26.125 10132.874 - 10183.286: 96.7256% ( 11) 00:07:26.125 10183.286 - 10233.698: 96.7941% ( 11) 00:07:26.125 10233.698 - 10284.111: 96.8563% ( 10) 00:07:26.125 10284.111 - 10334.523: 96.9310% ( 12) 00:07:26.125 10334.523 - 10384.935: 96.9995% ( 11) 00:07:26.125 10384.935 - 10435.348: 97.0929% ( 15) 00:07:26.125 10435.348 - 10485.760: 97.1489% ( 9) 00:07:26.125 10485.760 - 10536.172: 97.2049% ( 9) 00:07:26.125 10536.172 - 10586.585: 97.2672% ( 10) 00:07:26.125 10586.585 - 10636.997: 97.3606% ( 15) 00:07:26.125 10636.997 - 10687.409: 97.4477% ( 14) 00:07:26.125 10687.409 - 10737.822: 97.5286% ( 13) 00:07:26.125 10737.822 - 10788.234: 97.6033% ( 12) 00:07:26.125 10788.234 - 10838.646: 97.6594% ( 9) 00:07:26.125 10838.646 - 10889.058: 
97.7278% ( 11) 00:07:26.125 10889.058 - 10939.471: 97.7901% ( 10) 00:07:26.125 10939.471 - 10989.883: 97.8648% ( 12) 00:07:26.125 10989.883 - 11040.295: 97.9333% ( 11) 00:07:26.125 11040.295 - 11090.708: 97.9955% ( 10) 00:07:26.125 11090.708 - 11141.120: 98.0640% ( 11) 00:07:26.125 11141.120 - 11191.532: 98.1387% ( 12) 00:07:26.125 11191.532 - 11241.945: 98.1823% ( 7) 00:07:26.125 11241.945 - 11292.357: 98.2321% ( 8) 00:07:26.125 11292.357 - 11342.769: 98.2756% ( 7) 00:07:26.125 11342.769 - 11393.182: 98.3130% ( 6) 00:07:26.125 11393.182 - 11443.594: 98.3628% ( 8) 00:07:26.125 11443.594 - 11494.006: 98.3939% ( 5) 00:07:26.125 11494.006 - 11544.418: 98.4064% ( 2) 00:07:26.125 11645.243 - 11695.655: 98.4500% ( 7) 00:07:26.125 11695.655 - 11746.068: 98.5247% ( 12) 00:07:26.125 11746.068 - 11796.480: 98.5558% ( 5) 00:07:26.125 11796.480 - 11846.892: 98.5931% ( 6) 00:07:26.125 11846.892 - 11897.305: 98.6492% ( 9) 00:07:26.125 11897.305 - 11947.717: 98.6990% ( 8) 00:07:26.125 11947.717 - 11998.129: 98.7425% ( 7) 00:07:26.125 11998.129 - 12048.542: 98.7861% ( 7) 00:07:26.125 12048.542 - 12098.954: 98.8297% ( 7) 00:07:26.125 12098.954 - 12149.366: 98.8733% ( 7) 00:07:26.125 12149.366 - 12199.778: 98.9231% ( 8) 00:07:26.125 12199.778 - 12250.191: 98.9729% ( 8) 00:07:26.125 12250.191 - 12300.603: 99.0102% ( 6) 00:07:26.125 12300.603 - 12351.015: 99.0600% ( 8) 00:07:26.125 12351.015 - 12401.428: 99.1098% ( 8) 00:07:26.125 12401.428 - 12451.840: 99.1534% ( 7) 00:07:26.125 12451.840 - 12502.252: 99.1970% ( 7) 00:07:26.125 12502.252 - 12552.665: 99.2032% ( 1) 00:07:26.125 30045.735 - 30247.385: 99.2281% ( 4) 00:07:26.125 30247.385 - 30449.034: 99.2841% ( 9) 00:07:26.125 30449.034 - 30650.683: 99.3339% ( 8) 00:07:26.125 30650.683 - 30852.332: 99.3837% ( 8) 00:07:26.125 30852.332 - 31053.982: 99.4335% ( 8) 00:07:26.125 31053.982 - 31255.631: 99.4895% ( 9) 00:07:26.125 31255.631 - 31457.280: 99.5393% ( 8) 00:07:26.126 31457.280 - 31658.929: 99.5891% ( 8) 00:07:26.126 31658.929 - 31860.578: 99.6016% ( 2) 00:07:26.126 35086.966 - 35288.615: 99.6327% ( 5) 00:07:26.126 35288.615 - 35490.265: 99.6887% ( 9) 00:07:26.126 35490.265 - 35691.914: 99.7385% ( 8) 00:07:26.126 35691.914 - 35893.563: 99.7883% ( 8) 00:07:26.126 35893.563 - 36095.212: 99.8381% ( 8) 00:07:26.126 36095.212 - 36296.862: 99.8942% ( 9) 00:07:26.126 36296.862 - 36498.511: 99.9440% ( 8) 00:07:26.126 36498.511 - 36700.160: 100.0000% ( 9) 00:07:26.126 00:07:26.126 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:26.126 ============================================================================== 00:07:26.126 Range in us Cumulative IO count 00:07:26.126 6503.188 - 6553.600: 0.0249% ( 4) 00:07:26.126 6553.600 - 6604.012: 0.0809% ( 9) 00:07:26.126 6604.012 - 6654.425: 0.1992% ( 19) 00:07:26.126 6654.425 - 6704.837: 0.3362% ( 22) 00:07:26.126 6704.837 - 6755.249: 0.4544% ( 19) 00:07:26.126 6755.249 - 6805.662: 0.6723% ( 35) 00:07:26.126 6805.662 - 6856.074: 1.0707% ( 64) 00:07:26.126 6856.074 - 6906.486: 1.5625% ( 79) 00:07:26.126 6906.486 - 6956.898: 2.2348% ( 108) 00:07:26.126 6956.898 - 7007.311: 3.1997% ( 155) 00:07:26.126 7007.311 - 7057.723: 4.5443% ( 216) 00:07:26.126 7057.723 - 7108.135: 6.1877% ( 264) 00:07:26.126 7108.135 - 7158.548: 8.2171% ( 326) 00:07:26.126 7158.548 - 7208.960: 10.8939% ( 430) 00:07:26.126 7208.960 - 7259.372: 14.2617% ( 541) 00:07:26.126 7259.372 - 7309.785: 18.2084% ( 634) 00:07:26.126 7309.785 - 7360.197: 22.7403% ( 728) 00:07:26.126 7360.197 - 7410.609: 27.3469% ( 740) 00:07:26.126 7410.609 - 
7461.022: 31.9659% (  742)
...
00:07:26.126  34885.317 - 35086.966: 100.0000% (    7)
00:07:26.126
00:07:26.126 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:26.126 ==============================================================================
00:07:26.126        Range in us     Cumulative    IO count
00:07:26.126   6503.188 -  6553.600:  0.0124% (    2)
...
00:07:26.127  29440.788 - 29642.437: 100.0000% (    3)
00:07:26.127
00:07:26.127 13:20:14 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:27.074 Initializing NVMe Controllers
00:07:27.074 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:27.074 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:27.074 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:27.074 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:27.074 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:27.074 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:27.074 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:27.074 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:27.074 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:27.074 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:27.074 Initialization complete. Launching workers.
00:07:27.074 ========================================================
00:07:27.074                                                                            Latency(us)
00:07:27.074 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:27.074 PCIE (0000:00:10.0) NSID 1 from core  0:   15579.30     182.57    8226.89    6635.78   32024.87
00:07:27.074 PCIE (0000:00:11.0) NSID 1 from core  0:   15579.30     182.57    8214.45    6786.54   30198.67
00:07:27.074 PCIE (0000:00:13.0) NSID 1 from core  0:   15579.30     182.57    8201.64    6717.45   29054.84
00:07:27.074 PCIE (0000:00:12.0) NSID 1 from core  0:   15579.30     182.57    8189.03    6786.33   27270.40
00:07:27.074 PCIE (0000:00:12.0) NSID 2 from core  0:   15579.30     182.57    8176.36    6777.77   25544.35
00:07:27.074 PCIE (0000:00:12.0) NSID 3 from core  0:   15643.14     183.32    8130.38    6712.88   19959.79
00:07:27.074 ========================================================
00:07:27.074 Total                                  :   93539.62    1096.17    8189.75    6635.78   32024.87
00:07:27.074
00:07:27.074 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:27.074 =================================================================================
00:07:27.074   1.00000% :  6956.898us
00:07:27.074  10.00000% :  7208.960us
00:07:27.074  25.00000% :  7461.022us
00:07:27.074  50.00000% :  7813.908us
00:07:27.074  75.00000% :  8318.031us
00:07:27.074  90.00000% :  9427.102us
00:07:27.074  95.00000% : 10334.523us
00:07:27.074  98.00000% : 11695.655us
00:07:27.074  99.00000% : 12552.665us
00:07:27.074  99.50000% : 26214.400us
00:07:27.074  99.90000% : 31658.929us
00:07:27.074  99.99000% : 32062.228us
00:07:27.074  99.99900% : 32062.228us
00:07:27.074  99.99990% : 32062.228us
00:07:27.074  99.99999% : 32062.228us
00:07:27.074
00:07:27.074 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:27.074 =================================================================================
00:07:27.074   1.00000% :  7057.723us
00:07:27.074  10.00000% :  7360.197us
00:07:27.074  25.00000% :  7511.434us
00:07:27.074  50.00000% :  7813.908us
00:07:27.074  75.00000% :  8267.618us
00:07:27.074  90.00000% :  9427.102us
00:07:27.074  95.00000% : 10334.523us
00:07:27.074  98.00000% : 11645.243us
00:07:27.074  99.00000% : 12351.015us
00:07:27.074  99.50000% : 24500.382us
00:07:27.074  99.90000% : 29844.086us
00:07:27.074  99.99000% : 30247.385us
00:07:27.074  99.99900% : 30247.385us
00:07:27.074  99.99990% : 30247.385us
00:07:27.074  99.99999% : 30247.385us
00:07:27.074
00:07:27.074 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:27.074 =================================================================================
00:07:27.074   1.00000% :  7057.723us
00:07:27.074  10.00000% :  7309.785us
00:07:27.074  25.00000% :  7511.434us
00:07:27.074  50.00000% :  7813.908us
00:07:27.074  75.00000% :  8267.618us
00:07:27.074  90.00000% :  9427.102us
00:07:27.074  95.00000% : 10485.760us
00:07:27.074  98.00000% : 11695.655us
00:07:27.074  99.00000% : 12451.840us
00:07:27.074  99.50000% : 23794.609us
00:07:27.074  99.90000% : 28835.840us
00:07:27.074  99.99000% : 29037.489us
00:07:27.074  99.99900% : 29239.138us
00:07:27.074  99.99990% : 29239.138us
00:07:27.074  99.99999% : 29239.138us
00:07:27.074
00:07:27.074 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:27.074 =================================================================================
00:07:27.074   1.00000% :  7057.723us
00:07:27.074  10.00000% :  7309.785us
00:07:27.074  25.00000% :  7511.434us
00:07:27.074  50.00000% :  7813.908us
00:07:27.074  75.00000% :  8267.618us
00:07:27.074  90.00000% :  9427.102us
00:07:27.074  95.00000% : 10536.172us
00:07:27.074  98.00000% : 11544.418us
00:07:27.074  99.00000% : 12552.665us
00:07:27.074  99.50000% : 22080.591us
00:07:27.074  99.90000% : 27020.997us
00:07:27.074  99.99000% : 27424.295us
00:07:27.074  99.99900% : 27424.295us
00:07:27.075  99.99990% : 27424.295us
00:07:27.075  99.99999% : 27424.295us
00:07:27.075
00:07:27.075 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:27.075 =================================================================================
00:07:27.075   1.00000% :  7057.723us
00:07:27.075  10.00000% :  7309.785us
00:07:27.075  25.00000% :  7511.434us
00:07:27.075  50.00000% :  7813.908us
00:07:27.075  75.00000% :  8267.618us
00:07:27.075  90.00000% :  9427.102us
00:07:27.075  95.00000% : 10485.760us
00:07:27.075  98.00000% : 11494.006us
00:07:27.075  99.00000% : 12552.665us
00:07:27.075  99.50000% : 20265.748us
00:07:27.075  99.90000% : 25206.154us
00:07:27.075  99.99000% : 25609.452us
00:07:27.075  99.99900% : 25609.452us
00:07:27.075  99.99990% : 25609.452us
00:07:27.075  99.99999% : 25609.452us
00:07:27.075
00:07:27.075 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:27.075 =================================================================================
00:07:27.075   1.00000% :  7057.723us
00:07:27.075  10.00000% :  7309.785us
00:07:27.075  25.00000% :  7511.434us
00:07:27.075  50.00000% :  7813.908us
00:07:27.075  75.00000% :  8267.618us
00:07:27.075  90.00000% :  9427.102us
00:07:27.075  95.00000% : 10384.935us
00:07:27.075  98.00000% : 11594.831us
00:07:27.075  99.00000% : 12552.665us
00:07:27.075  99.50000% : 14317.095us
00:07:27.075  99.90000% : 19660.800us
00:07:27.075  99.99000% : 19963.274us
00:07:27.075  99.99900% : 19963.274us
00:07:27.075  99.99990% : 19963.274us
00:07:27.075  99.99999% : 19963.274us
00:07:27.075
00:07:27.075 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:27.075 ==============================================================================
00:07:27.075        Range in us     Cumulative    IO count
00:07:27.075   6604.012 -  6654.425:  0.0128% (    2)
...
00:07:27.075  31860.578 - 32062.228: 100.0000% (    6)
00:07:27.075
00:07:27.075 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:27.075 ==============================================================================
00:07:27.075        Range in us     Cumulative    IO count
00:07:27.075   6755.249 -  6805.662:  0.0192% (    3)
...
00:07:27.076  30045.735 - 30247.385: 100.0000% (    7)
00:07:27.076
00:07:27.076 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:27.076 ==============================================================================
00:07:27.076        Range in us     Cumulative    IO count
00:07:27.076   6704.837 -  6755.249:  0.0064% (    1)
...
00:07:27.077  29037.489 - 29239.138: 100.0000% (    1)
00:07:27.077
00:07:27.077 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:27.077 ==============================================================================
00:07:27.077        Range in us     Cumulative    IO count
00:07:27.077   6755.249 -  6805.662:  0.0128% (    2)
...
00:07:27.078  27222.646 - 27424.295: 100.0000% (    3)
00:07:27.078
00:07:27.078 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:27.078 ==============================================================================
00:07:27.078        Range in us     Cumulative    IO count
00:07:27.078   6755.249 -  6805.662:  0.0256% (    4)
...
00:07:27.079  25508.628 - 25609.452: 100.0000% (    2)
00:07:27.079
00:07:27.079 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:27.079 ==============================================================================
00:07:27.079        Range in us     Cumulative    IO count
00:07:27.079   6704.837 -  6755.249:  0.0064% (    1)
...
00:07:27.080  19862.449 - 19963.274: 100.0000% (    5)
00:07:27.080
00:07:27.080 13:20:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:27.080
00:07:27.080 real    0m2.509s
00:07:27.080 user    0m2.228s
00:07:27.080 sys     0m0.179s
00:07:27.080 13:20:15 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.080 ************************************
00:07:27.080 END TEST nvme_perf
00:07:27.080 ************************************
00:07:27.080 13:20:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
13:20:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
13:20:15 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
13:20:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
13:20:15 nvme -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvme_hello_world
************************************
13:20:15 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:27.341 Initializing NVMe Controllers
00:07:27.341 Attached to 0000:00:10.0
00:07:27.341   Namespace ID: 1 size: 6GB
00:07:27.341 Attached to 0000:00:11.0
00:07:27.341   Namespace ID: 1 size: 5GB
00:07:27.341 Attached to 0000:00:13.0
00:07:27.341   Namespace ID: 1 size: 1GB
00:07:27.341 Attached to 0000:00:12.0
00:07:27.341   Namespace ID: 1 size: 4GB
00:07:27.341   Namespace ID: 2 size: 4GB
00:07:27.341   Namespace ID: 3 size: 4GB
00:07:27.341 Initialization complete.
00:07:27.341 INFO: using host memory buffer for IO
00:07:27.341 Hello world!
00:07:27.341 INFO: using host memory buffer for IO
00:07:27.341 Hello world!
00:07:27.341 INFO: using host memory buffer for IO 00:07:27.341 Hello world! 00:07:27.341 INFO: using host memory buffer for IO 00:07:27.341 Hello world! 00:07:27.341 INFO: using host memory buffer for IO 00:07:27.341 Hello world! 00:07:27.341 INFO: using host memory buffer for IO 00:07:27.341 Hello world! 00:07:27.341 00:07:27.342 real 0m0.209s 00:07:27.342 user 0m0.083s 00:07:27.342 sys 0m0.086s 00:07:27.342 13:20:15 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.342 13:20:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:27.342 ************************************ 00:07:27.342 END TEST nvme_hello_world 00:07:27.342 ************************************ 00:07:27.342 13:20:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:27.342 13:20:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.342 13:20:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.342 13:20:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.342 ************************************ 00:07:27.342 START TEST nvme_sgl 00:07:27.342 ************************************ 00:07:27.342 13:20:15 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:27.603 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:27.603 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:27.603 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:27.603 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:27.603 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:27.603 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:27.603 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:27.603 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_5 Invalid IO length parameter 
00:07:27.603 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:27.603 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:27.603 NVMe Readv/Writev Request test 00:07:27.603 Attached to 0000:00:10.0 00:07:27.603 Attached to 0000:00:11.0 00:07:27.603 Attached to 0000:00:13.0 00:07:27.603 Attached to 0000:00:12.0 00:07:27.603 0000:00:10.0: build_io_request_2 test passed 00:07:27.603 0000:00:10.0: build_io_request_4 test passed 00:07:27.603 0000:00:10.0: build_io_request_5 test passed 00:07:27.603 0000:00:10.0: build_io_request_6 test passed 00:07:27.603 0000:00:10.0: build_io_request_7 test passed 00:07:27.603 0000:00:10.0: build_io_request_10 test passed 00:07:27.603 0000:00:11.0: build_io_request_2 test passed 00:07:27.603 0000:00:11.0: build_io_request_4 test passed 00:07:27.603 0000:00:11.0: build_io_request_5 test passed 00:07:27.603 0000:00:11.0: build_io_request_6 test passed 00:07:27.603 0000:00:11.0: build_io_request_7 test passed 00:07:27.603 0000:00:11.0: build_io_request_10 test passed 00:07:27.603 Cleaning up... 00:07:27.862 00:07:27.862 real 0m0.279s 00:07:27.862 user 0m0.141s 00:07:27.862 sys 0m0.091s 00:07:27.862 13:20:16 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.862 13:20:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:27.862 ************************************ 00:07:27.862 END TEST nvme_sgl 00:07:27.862 ************************************ 00:07:27.862 13:20:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:27.862 13:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.862 13:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.862 13:20:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.862 ************************************ 00:07:27.862 START TEST nvme_e2edp 00:07:27.862 ************************************ 00:07:27.862 13:20:16 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:27.862 NVMe Write/Read with End-to-End data protection test 00:07:27.862 Attached to 0000:00:10.0 00:07:27.862 Attached to 0000:00:11.0 00:07:27.862 Attached to 0000:00:13.0 00:07:27.862 Attached to 0000:00:12.0 00:07:27.862 Cleaning up... 
00:07:27.862
00:07:27.862 real 0m0.215s
00:07:27.862 user 0m0.069s
00:07:27.862 sys 0m0.103s
00:07:27.862 13:20:16 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.862 ************************************
00:07:27.862 END TEST nvme_e2edp
00:07:27.862 ************************************
00:07:27.862 13:20:16 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:28.121 13:20:16 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:28.121 13:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.121 13:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.121 13:20:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.121 ************************************
00:07:28.121 START TEST nvme_reserve
00:07:28.121 ************************************
00:07:28.121 13:20:16 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:28.121 =====================================================
00:07:28.121 NVMe Controller at PCI bus 0, device 16, function 0
00:07:28.121 =====================================================
00:07:28.121 Reservations: Not Supported
00:07:28.121 =====================================================
00:07:28.121 NVMe Controller at PCI bus 0, device 17, function 0
00:07:28.121 =====================================================
00:07:28.121 Reservations: Not Supported
00:07:28.121 =====================================================
00:07:28.121 NVMe Controller at PCI bus 0, device 19, function 0
00:07:28.121 =====================================================
00:07:28.121 Reservations: Not Supported
00:07:28.121 =====================================================
00:07:28.121 NVMe Controller at PCI bus 0, device 18, function 0
00:07:28.121 =====================================================
00:07:28.121 Reservations: Not Supported
00:07:28.121 Reservation test passed
00:07:28.121 ************************************
00:07:28.121 END TEST nvme_reserve
00:07:28.121 ************************************
00:07:28.121
00:07:28.121 real 0m0.210s
00:07:28.121 user 0m0.068s
00:07:28.121 sys 0m0.098s
00:07:28.121 13:20:16 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.121 13:20:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:28.379 13:20:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:28.379 13:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.379 13:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.379 13:20:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.380 ************************************
00:07:28.380 START TEST nvme_err_injection
00:07:28.380 ************************************
00:07:28.380 13:20:16 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:28.380 NVMe Error Injection test
00:07:28.380 Attached to 0000:00:10.0
00:07:28.380 Attached to 0000:00:11.0
00:07:28.380 Attached to 0000:00:13.0
00:07:28.380 Attached to 0000:00:12.0
00:07:28.380 0000:00:10.0: get features failed as expected
00:07:28.380 0000:00:11.0: get features failed as expected
00:07:28.380 0000:00:13.0: get features failed as expected
00:07:28.380 0000:00:12.0: get features failed as expected
00:07:28.380 0000:00:10.0: get features successfully as expected
00:07:28.380 0000:00:11.0: get features successfully as expected
00:07:28.380 0000:00:13.0: get features successfully as expected
00:07:28.380 0000:00:12.0: get features successfully as expected
00:07:28.380 0000:00:10.0: read failed as expected
00:07:28.380 0000:00:11.0: read failed as expected
00:07:28.380 0000:00:13.0: read failed as expected
00:07:28.380 0000:00:12.0: read failed as expected
00:07:28.380 0000:00:10.0: read successfully as expected
00:07:28.380 0000:00:11.0: read successfully as expected
00:07:28.380 0000:00:13.0: read successfully as expected
00:07:28.380 0000:00:12.0: read successfully as expected
00:07:28.380 Cleaning up...
00:07:28.380 ************************************
00:07:28.380 END TEST nvme_err_injection
00:07:28.380 ************************************
00:07:28.380
00:07:28.380 real 0m0.226s
00:07:28.380 user 0m0.077s
00:07:28.380 sys 0m0.105s
00:07:28.380 13:20:16 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.380 13:20:16 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:28.638 13:20:16 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:28.638 13:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:28.638 13:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.638 13:20:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.638 ************************************
00:07:28.638 START TEST nvme_overhead
00:07:28.638 ************************************
00:07:28.638 13:20:16 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:30.013 Initializing NVMe Controllers
00:07:30.013 Attached to 0000:00:10.0
00:07:30.013 Attached to 0000:00:11.0
00:07:30.013 Attached to 0000:00:13.0
00:07:30.013 Attached to 0000:00:12.0
00:07:30.013 Initialization complete. Launching workers.
00:07:30.013 submit (in ns) avg, min, max = 11380.8, 10373.1, 72096.9 00:07:30.013 complete (in ns) avg, min, max = 7623.4, 7216.2, 302966.2 00:07:30.013 00:07:30.013 Submit histogram 00:07:30.013 ================ 00:07:30.013 Range in us Cumulative Count 00:07:30.013 10.338 - 10.388: 0.0056% ( 1) 00:07:30.013 10.683 - 10.732: 0.0112% ( 1) 00:07:30.013 10.782 - 10.831: 0.0335% ( 4) 00:07:30.013 10.831 - 10.880: 0.5870% ( 99) 00:07:30.013 10.880 - 10.929: 3.3766% ( 499) 00:07:30.013 10.929 - 10.978: 10.6105% ( 1294) 00:07:30.013 10.978 - 11.028: 22.7974% ( 2180) 00:07:30.013 11.028 - 11.077: 38.2100% ( 2757) 00:07:30.013 11.077 - 11.126: 53.9859% ( 2822) 00:07:30.013 11.126 - 11.175: 66.6704% ( 2269) 00:07:30.013 11.175 - 11.225: 75.5870% ( 1595) 00:07:30.013 11.225 - 11.274: 81.1270% ( 991) 00:07:30.013 11.274 - 11.323: 84.3079% ( 569) 00:07:30.013 11.323 - 11.372: 85.9682% ( 297) 00:07:30.013 11.372 - 11.422: 87.0416% ( 192) 00:07:30.013 11.422 - 11.471: 87.7516% ( 127) 00:07:30.013 11.471 - 11.520: 88.3050% ( 99) 00:07:30.013 11.520 - 11.569: 88.8640% ( 100) 00:07:30.013 11.569 - 11.618: 89.3225% ( 82) 00:07:30.013 11.618 - 11.668: 89.9821% ( 118) 00:07:30.013 11.668 - 11.717: 90.5691% ( 105) 00:07:30.013 11.717 - 11.766: 91.1058% ( 96) 00:07:30.013 11.766 - 11.815: 91.6536% ( 98) 00:07:30.013 11.815 - 11.865: 92.2071% ( 99) 00:07:30.013 11.865 - 11.914: 92.8108% ( 108) 00:07:30.013 11.914 - 11.963: 93.3810% ( 102) 00:07:30.013 11.963 - 12.012: 93.9624% ( 104) 00:07:30.013 12.012 - 12.062: 94.4823% ( 93) 00:07:30.013 12.062 - 12.111: 94.8401% ( 64) 00:07:30.013 12.111 - 12.160: 95.0973% ( 46) 00:07:30.013 12.160 - 12.209: 95.3377% ( 43) 00:07:30.013 12.209 - 12.258: 95.5780% ( 43) 00:07:30.013 12.258 - 12.308: 95.8128% ( 42) 00:07:30.013 12.308 - 12.357: 95.9358% ( 22) 00:07:30.013 12.357 - 12.406: 96.0309% ( 17) 00:07:30.013 12.406 - 12.455: 96.0812% ( 9) 00:07:30.013 12.455 - 12.505: 96.1259% ( 8) 00:07:30.013 12.505 - 12.554: 96.1650% ( 7) 00:07:30.013 12.554 - 12.603: 96.2209% ( 10) 00:07:30.013 12.603 - 12.702: 96.2545% ( 6) 00:07:30.013 12.702 - 12.800: 96.2768% ( 4) 00:07:30.013 12.800 - 12.898: 96.2880% ( 2) 00:07:30.013 12.898 - 12.997: 96.3048% ( 3) 00:07:30.013 12.997 - 13.095: 96.3551% ( 9) 00:07:30.013 13.095 - 13.194: 96.4445% ( 16) 00:07:30.013 13.194 - 13.292: 96.5675% ( 22) 00:07:30.013 13.292 - 13.391: 96.6737% ( 19) 00:07:30.013 13.391 - 13.489: 96.7688% ( 17) 00:07:30.013 13.489 - 13.588: 96.8470% ( 14) 00:07:30.013 13.588 - 13.686: 96.9533% ( 19) 00:07:30.013 13.686 - 13.785: 97.0595% ( 19) 00:07:30.013 13.785 - 13.883: 97.1377% ( 14) 00:07:30.013 13.883 - 13.982: 97.1992% ( 11) 00:07:30.013 13.982 - 14.080: 97.2607% ( 11) 00:07:30.013 14.080 - 14.178: 97.2999% ( 7) 00:07:30.013 14.178 - 14.277: 97.3055% ( 1) 00:07:30.013 14.277 - 14.375: 97.3725% ( 12) 00:07:30.013 14.375 - 14.474: 97.3781% ( 1) 00:07:30.013 14.474 - 14.572: 97.4117% ( 6) 00:07:30.013 14.572 - 14.671: 97.4508% ( 7) 00:07:30.013 14.671 - 14.769: 97.4620% ( 2) 00:07:30.013 14.769 - 14.868: 97.4843% ( 4) 00:07:30.013 14.868 - 14.966: 97.5067% ( 4) 00:07:30.013 14.966 - 15.065: 97.5682% ( 11) 00:07:30.013 15.065 - 15.163: 97.5794% ( 2) 00:07:30.013 15.163 - 15.262: 97.5906% ( 2) 00:07:30.013 15.262 - 15.360: 97.6073% ( 3) 00:07:30.013 15.360 - 15.458: 97.6465% ( 7) 00:07:30.013 15.458 - 15.557: 97.6632% ( 3) 00:07:30.013 15.557 - 15.655: 97.6912% ( 5) 00:07:30.013 15.655 - 15.754: 97.7080% ( 3) 00:07:30.013 15.754 - 15.852: 97.7247% ( 3) 00:07:30.013 15.951 - 16.049: 97.7527% ( 5) 00:07:30.013 16.049 - 
16.148: 97.7639% ( 2) 00:07:30.013 16.148 - 16.246: 97.7750% ( 2) 00:07:30.013 16.246 - 16.345: 97.7918% ( 3) 00:07:30.013 16.345 - 16.443: 97.8421% ( 9) 00:07:30.013 16.443 - 16.542: 97.9875% ( 26) 00:07:30.013 16.542 - 16.640: 98.2055% ( 39) 00:07:30.013 16.640 - 16.738: 98.3453% ( 25) 00:07:30.013 16.738 - 16.837: 98.4403% ( 17) 00:07:30.013 16.837 - 16.935: 98.5130% ( 13) 00:07:30.013 16.935 - 17.034: 98.6024% ( 16) 00:07:30.013 17.034 - 17.132: 98.6863% ( 15) 00:07:30.013 17.132 - 17.231: 98.7366% ( 9) 00:07:30.013 17.231 - 17.329: 98.7925% ( 10) 00:07:30.013 17.329 - 17.428: 98.8931% ( 18) 00:07:30.013 17.428 - 17.526: 98.9714% ( 14) 00:07:30.013 17.526 - 17.625: 99.0217% ( 9) 00:07:30.013 17.625 - 17.723: 99.0664% ( 8) 00:07:30.013 17.723 - 17.822: 99.1391% ( 13) 00:07:30.013 17.822 - 17.920: 99.1782% ( 7) 00:07:30.013 17.920 - 18.018: 99.2509% ( 13) 00:07:30.013 18.018 - 18.117: 99.2900% ( 7) 00:07:30.013 18.117 - 18.215: 99.3068% ( 3) 00:07:30.013 18.215 - 18.314: 99.3627% ( 10) 00:07:30.013 18.314 - 18.412: 99.4074% ( 8) 00:07:30.013 18.412 - 18.511: 99.4577% ( 9) 00:07:30.013 18.511 - 18.609: 99.5081% ( 9) 00:07:30.013 18.609 - 18.708: 99.5472% ( 7) 00:07:30.013 18.708 - 18.806: 99.5640% ( 3) 00:07:30.013 18.806 - 18.905: 99.6031% ( 7) 00:07:30.013 18.905 - 19.003: 99.6254% ( 4) 00:07:30.013 19.003 - 19.102: 99.6534% ( 5) 00:07:30.013 19.102 - 19.200: 99.6758% ( 4) 00:07:30.013 19.200 - 19.298: 99.6869% ( 2) 00:07:30.013 19.298 - 19.397: 99.7037% ( 3) 00:07:30.013 19.397 - 19.495: 99.7093% ( 1) 00:07:30.013 19.692 - 19.791: 99.7149% ( 1) 00:07:30.013 19.791 - 19.889: 99.7261% ( 2) 00:07:30.013 19.889 - 19.988: 99.7317% ( 1) 00:07:30.013 19.988 - 20.086: 99.7428% ( 2) 00:07:30.013 20.086 - 20.185: 99.7484% ( 1) 00:07:30.013 20.185 - 20.283: 99.7540% ( 1) 00:07:30.013 20.382 - 20.480: 99.7596% ( 1) 00:07:30.013 20.578 - 20.677: 99.7708% ( 2) 00:07:30.013 20.677 - 20.775: 99.7820% ( 2) 00:07:30.013 20.775 - 20.874: 99.7932% ( 2) 00:07:30.013 20.874 - 20.972: 99.8043% ( 2) 00:07:30.013 20.972 - 21.071: 99.8155% ( 2) 00:07:30.013 21.366 - 21.465: 99.8211% ( 1) 00:07:30.013 21.465 - 21.563: 99.8267% ( 1) 00:07:30.013 21.662 - 21.760: 99.8323% ( 1) 00:07:30.013 21.760 - 21.858: 99.8379% ( 1) 00:07:30.013 22.055 - 22.154: 99.8435% ( 1) 00:07:30.013 22.154 - 22.252: 99.8491% ( 1) 00:07:30.013 22.548 - 22.646: 99.8547% ( 1) 00:07:30.013 22.745 - 22.843: 99.8602% ( 1) 00:07:30.013 22.843 - 22.942: 99.8658% ( 1) 00:07:30.013 23.335 - 23.434: 99.8714% ( 1) 00:07:30.013 23.434 - 23.532: 99.8770% ( 1) 00:07:30.013 23.828 - 23.926: 99.8826% ( 1) 00:07:30.013 24.025 - 24.123: 99.8882% ( 1) 00:07:30.013 25.009 - 25.108: 99.8938% ( 1) 00:07:30.013 25.600 - 25.797: 99.8994% ( 1) 00:07:30.013 25.797 - 25.994: 99.9106% ( 2) 00:07:30.013 26.191 - 26.388: 99.9217% ( 2) 00:07:30.013 26.782 - 26.978: 99.9273% ( 1) 00:07:30.013 26.978 - 27.175: 99.9329% ( 1) 00:07:30.013 27.372 - 27.569: 99.9385% ( 1) 00:07:30.013 27.766 - 27.963: 99.9441% ( 1) 00:07:30.013 28.948 - 29.145: 99.9497% ( 1) 00:07:30.013 32.295 - 32.492: 99.9553% ( 1) 00:07:30.013 38.006 - 38.203: 99.9609% ( 1) 00:07:30.013 38.794 - 38.991: 99.9665% ( 1) 00:07:30.013 39.975 - 40.172: 99.9720% ( 1) 00:07:30.013 47.262 - 47.458: 99.9776% ( 1) 00:07:30.013 48.049 - 48.246: 99.9832% ( 1) 00:07:30.013 55.138 - 55.532: 99.9888% ( 1) 00:07:30.013 66.560 - 66.954: 99.9944% ( 1) 00:07:30.013 72.074 - 72.468: 100.0000% ( 1) 00:07:30.013 00:07:30.013 Complete histogram 00:07:30.013 ================== 00:07:30.013 Range in us Cumulative Count 
00:07:30.013 7.188 - 7.237: 0.0224% ( 4) 00:07:30.013 7.237 - 7.286: 2.0517% ( 363) 00:07:30.013 7.286 - 7.335: 15.1778% ( 2348) 00:07:30.013 7.335 - 7.385: 40.3399% ( 4501) 00:07:30.013 7.385 - 7.434: 63.3386% ( 4114) 00:07:30.014 7.434 - 7.483: 78.2536% ( 2668) 00:07:30.014 7.483 - 7.532: 86.8012% ( 1529) 00:07:30.014 7.532 - 7.582: 91.0387% ( 758) 00:07:30.014 7.582 - 7.631: 93.1574% ( 379) 00:07:30.014 7.631 - 7.680: 94.1469% ( 177) 00:07:30.014 7.680 - 7.729: 94.6892% ( 97) 00:07:30.014 7.729 - 7.778: 95.0637% ( 67) 00:07:30.014 7.778 - 7.828: 95.1811% ( 21) 00:07:30.014 7.828 - 7.877: 95.2706% ( 16) 00:07:30.014 7.877 - 7.926: 95.3936% ( 22) 00:07:30.014 7.926 - 7.975: 95.4439% ( 9) 00:07:30.014 7.975 - 8.025: 95.5221% ( 14) 00:07:30.014 8.025 - 8.074: 95.6563% ( 24) 00:07:30.014 8.074 - 8.123: 95.8240% ( 30) 00:07:30.014 8.123 - 8.172: 96.1259% ( 54) 00:07:30.014 8.172 - 8.222: 96.3886% ( 47) 00:07:30.014 8.222 - 8.271: 96.6123% ( 40) 00:07:30.014 8.271 - 8.320: 96.7520% ( 25) 00:07:30.014 8.320 - 8.369: 96.8694% ( 21) 00:07:30.014 8.369 - 8.418: 96.9030% ( 6) 00:07:30.014 8.418 - 8.468: 96.9477% ( 8) 00:07:30.014 8.468 - 8.517: 97.0036% ( 10) 00:07:30.014 8.517 - 8.566: 97.0148% ( 2) 00:07:30.014 8.566 - 8.615: 97.0203% ( 1) 00:07:30.014 8.615 - 8.665: 97.0315% ( 2) 00:07:30.014 8.665 - 8.714: 97.0427% ( 2) 00:07:30.014 8.763 - 8.812: 97.0483% ( 1) 00:07:30.014 8.812 - 8.862: 97.0539% ( 1) 00:07:30.014 8.862 - 8.911: 97.0595% ( 1) 00:07:30.014 9.009 - 9.058: 97.0651% ( 1) 00:07:30.014 9.058 - 9.108: 97.0707% ( 1) 00:07:30.014 9.108 - 9.157: 97.0763% ( 1) 00:07:30.014 9.157 - 9.206: 97.0818% ( 1) 00:07:30.014 9.206 - 9.255: 97.0874% ( 1) 00:07:30.014 9.255 - 9.305: 97.1042% ( 3) 00:07:30.014 9.305 - 9.354: 97.1489% ( 8) 00:07:30.014 9.354 - 9.403: 97.2160% ( 12) 00:07:30.014 9.403 - 9.452: 97.2496% ( 6) 00:07:30.014 9.452 - 9.502: 97.2775% ( 5) 00:07:30.014 9.502 - 9.551: 97.2999% ( 4) 00:07:30.014 9.551 - 9.600: 97.3222% ( 4) 00:07:30.014 9.600 - 9.649: 97.3390% ( 3) 00:07:30.014 9.649 - 9.698: 97.3446% ( 1) 00:07:30.014 9.748 - 9.797: 97.3614% ( 3) 00:07:30.014 9.797 - 9.846: 97.3781% ( 3) 00:07:30.014 9.846 - 9.895: 97.3837% ( 1) 00:07:30.014 9.895 - 9.945: 97.3949% ( 2) 00:07:30.014 9.945 - 9.994: 97.4005% ( 1) 00:07:30.014 9.994 - 10.043: 97.4061% ( 1) 00:07:30.014 10.043 - 10.092: 97.4284% ( 4) 00:07:30.014 10.092 - 10.142: 97.4508% ( 4) 00:07:30.014 10.191 - 10.240: 97.4676% ( 3) 00:07:30.014 10.338 - 10.388: 97.4788% ( 2) 00:07:30.014 10.437 - 10.486: 97.4899% ( 2) 00:07:30.014 10.585 - 10.634: 97.5011% ( 2) 00:07:30.014 10.683 - 10.732: 97.5067% ( 1) 00:07:30.014 10.732 - 10.782: 97.5123% ( 1) 00:07:30.014 10.782 - 10.831: 97.5291% ( 3) 00:07:30.014 10.831 - 10.880: 97.5347% ( 1) 00:07:30.014 10.929 - 10.978: 97.5403% ( 1) 00:07:30.014 11.274 - 11.323: 97.5514% ( 2) 00:07:30.014 11.323 - 11.372: 97.6073% ( 10) 00:07:30.014 11.372 - 11.422: 97.6912% ( 15) 00:07:30.014 11.422 - 11.471: 97.8701% ( 32) 00:07:30.014 11.471 - 11.520: 98.0266% ( 28) 00:07:30.014 11.520 - 11.569: 98.1608% ( 24) 00:07:30.014 11.569 - 11.618: 98.2502% ( 16) 00:07:30.014 11.618 - 11.668: 98.3117% ( 11) 00:07:30.014 11.668 - 11.717: 98.3285% ( 3) 00:07:30.014 11.717 - 11.766: 98.3564% ( 5) 00:07:30.014 11.766 - 11.815: 98.3676% ( 2) 00:07:30.014 11.815 - 11.865: 98.3844% ( 3) 00:07:30.014 11.914 - 11.963: 98.3900% ( 1) 00:07:30.014 12.111 - 12.160: 98.3956% ( 1) 00:07:30.014 12.258 - 12.308: 98.4012% ( 1) 00:07:30.014 12.308 - 12.357: 98.4068% ( 1) 00:07:30.014 12.554 - 12.603: 98.4123% ( 1) 
00:07:30.014 12.702 - 12.800: 98.4235% ( 2) 00:07:30.014 12.898 - 12.997: 98.4459% ( 4) 00:07:30.014 12.997 - 13.095: 98.4738% ( 5) 00:07:30.014 13.095 - 13.194: 98.5018% ( 5) 00:07:30.014 13.194 - 13.292: 98.5577% ( 10) 00:07:30.014 13.292 - 13.391: 98.5912% ( 6) 00:07:30.014 13.391 - 13.489: 98.6136% ( 4) 00:07:30.014 13.489 - 13.588: 98.6415% ( 5) 00:07:30.014 13.588 - 13.686: 98.6863% ( 8) 00:07:30.014 13.686 - 13.785: 98.7534% ( 12) 00:07:30.014 13.785 - 13.883: 98.8428% ( 16) 00:07:30.014 13.883 - 13.982: 98.9322% ( 16) 00:07:30.014 13.982 - 14.080: 99.0217% ( 16) 00:07:30.014 14.080 - 14.178: 99.1000% ( 14) 00:07:30.014 14.178 - 14.277: 99.1559% ( 10) 00:07:30.014 14.277 - 14.375: 99.2174% ( 11) 00:07:30.014 14.375 - 14.474: 99.2844% ( 12) 00:07:30.014 14.474 - 14.572: 99.3627% ( 14) 00:07:30.014 14.572 - 14.671: 99.4354% ( 13) 00:07:30.014 14.671 - 14.769: 99.4633% ( 5) 00:07:30.014 14.769 - 14.868: 99.5192% ( 10) 00:07:30.014 14.868 - 14.966: 99.5528% ( 6) 00:07:30.014 14.966 - 15.065: 99.5919% ( 7) 00:07:30.014 15.065 - 15.163: 99.6366% ( 8) 00:07:30.014 15.163 - 15.262: 99.6590% ( 4) 00:07:30.014 15.262 - 15.360: 99.6702% ( 2) 00:07:30.014 15.360 - 15.458: 99.6925% ( 4) 00:07:30.014 15.458 - 15.557: 99.7205% ( 5) 00:07:30.014 15.557 - 15.655: 99.7317% ( 2) 00:07:30.014 15.655 - 15.754: 99.7540% ( 4) 00:07:30.014 15.852 - 15.951: 99.7652% ( 2) 00:07:30.014 15.951 - 16.049: 99.7708% ( 1) 00:07:30.014 16.049 - 16.148: 99.7764% ( 1) 00:07:30.014 16.148 - 16.246: 99.7820% ( 1) 00:07:30.014 16.246 - 16.345: 99.7876% ( 1) 00:07:30.014 16.345 - 16.443: 99.7932% ( 1) 00:07:30.014 16.443 - 16.542: 99.7987% ( 1) 00:07:30.014 16.542 - 16.640: 99.8099% ( 2) 00:07:30.014 16.640 - 16.738: 99.8155% ( 1) 00:07:30.014 16.837 - 16.935: 99.8211% ( 1) 00:07:30.014 16.935 - 17.034: 99.8267% ( 1) 00:07:30.014 17.034 - 17.132: 99.8435% ( 3) 00:07:30.014 17.231 - 17.329: 99.8491% ( 1) 00:07:30.014 17.526 - 17.625: 99.8658% ( 3) 00:07:30.014 17.723 - 17.822: 99.8714% ( 1) 00:07:30.014 17.822 - 17.920: 99.8770% ( 1) 00:07:30.014 18.215 - 18.314: 99.8826% ( 1) 00:07:30.014 18.511 - 18.609: 99.8938% ( 2) 00:07:30.014 18.609 - 18.708: 99.9050% ( 2) 00:07:30.014 19.102 - 19.200: 99.9106% ( 1) 00:07:30.014 19.495 - 19.594: 99.9161% ( 1) 00:07:30.014 19.889 - 19.988: 99.9217% ( 1) 00:07:30.014 20.086 - 20.185: 99.9273% ( 1) 00:07:30.014 21.268 - 21.366: 99.9329% ( 1) 00:07:30.014 22.055 - 22.154: 99.9385% ( 1) 00:07:30.014 22.449 - 22.548: 99.9441% ( 1) 00:07:30.014 23.631 - 23.729: 99.9497% ( 1) 00:07:30.014 25.797 - 25.994: 99.9553% ( 1) 00:07:30.014 25.994 - 26.191: 99.9609% ( 1) 00:07:30.014 30.917 - 31.114: 99.9665% ( 1) 00:07:30.014 47.262 - 47.458: 99.9720% ( 1) 00:07:30.014 61.440 - 61.834: 99.9776% ( 1) 00:07:30.014 63.409 - 63.803: 99.9832% ( 1) 00:07:30.014 76.800 - 77.194: 99.9888% ( 1) 00:07:30.014 100.431 - 100.825: 99.9944% ( 1) 00:07:30.014 302.474 - 304.049: 100.0000% ( 1) 00:07:30.014 00:07:30.014 ************************************ 00:07:30.014 END TEST nvme_overhead 00:07:30.014 ************************************ 00:07:30.014 00:07:30.014 real 0m1.220s 00:07:30.014 user 0m1.076s 00:07:30.014 sys 0m0.097s 00:07:30.014 13:20:18 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.014 13:20:18 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:30.014 13:20:18 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:30.014 13:20:18 nvme -- common/autotest_common.sh@1105 -- # 
'[' 6 -le 1 ']' 00:07:30.014 13:20:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.014 13:20:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.014 ************************************ 00:07:30.014 START TEST nvme_arbitration 00:07:30.014 ************************************ 00:07:30.014 13:20:18 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:33.313 Initializing NVMe Controllers 00:07:33.313 Attached to 0000:00:10.0 00:07:33.313 Attached to 0000:00:11.0 00:07:33.313 Attached to 0000:00:13.0 00:07:33.313 Attached to 0000:00:12.0 00:07:33.313 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:33.313 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:33.313 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:33.313 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:33.313 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:33.313 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:33.313 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:33.313 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:33.313 Initialization complete. Launching workers. 00:07:33.313 Starting thread on core 1 with urgent priority queue 00:07:33.313 Starting thread on core 2 with urgent priority queue 00:07:33.313 Starting thread on core 3 with urgent priority queue 00:07:33.313 Starting thread on core 0 with urgent priority queue 00:07:33.313 QEMU NVMe Ctrl (12340 ) core 0: 789.33 IO/s 126.69 secs/100000 ios 00:07:33.313 QEMU NVMe Ctrl (12342 ) core 0: 789.33 IO/s 126.69 secs/100000 ios 00:07:33.313 QEMU NVMe Ctrl (12341 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:07:33.313 QEMU NVMe Ctrl (12342 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:07:33.313 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:33.313 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:07:33.313 ======================================================== 00:07:33.313 00:07:33.313 ************************************ 00:07:33.313 END TEST nvme_arbitration 00:07:33.313 ************************************ 00:07:33.313 00:07:33.313 real 0m3.350s 00:07:33.313 user 0m9.278s 00:07:33.313 sys 0m0.127s 00:07:33.313 13:20:21 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.313 13:20:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:33.313 13:20:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:33.313 13:20:21 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:33.313 13:20:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.313 13:20:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.313 ************************************ 00:07:33.313 START TEST nvme_single_aen 00:07:33.313 ************************************ 00:07:33.313 13:20:21 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:33.313 Asynchronous Event Request test 00:07:33.313 Attached to 0000:00:10.0 00:07:33.313 Attached to 0000:00:11.0 00:07:33.313 Attached to 0000:00:13.0 00:07:33.313 Attached to 0000:00:12.0 00:07:33.313 Reset controller to setup AER completions for this process 00:07:33.313 Registering asynchronous event callbacks... 
00:07:33.313 Getting orig temperature thresholds of all controllers 00:07:33.313 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.313 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.313 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.313 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:33.313 Setting all controllers temperature threshold low to trigger AER 00:07:33.313 Waiting for all controllers temperature threshold to be set lower 00:07:33.313 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.313 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:33.313 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.313 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:33.313 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.313 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:33.313 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:33.313 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:33.313 Waiting for all controllers to trigger AER and reset threshold 00:07:33.313 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.313 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.313 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.313 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:33.313 Cleaning up... 00:07:33.313 00:07:33.313 real 0m0.223s 00:07:33.313 user 0m0.082s 00:07:33.313 sys 0m0.108s 00:07:33.313 ************************************ 00:07:33.313 END TEST nvme_single_aen 00:07:33.313 ************************************ 00:07:33.313 13:20:21 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.313 13:20:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:33.576 13:20:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:33.576 13:20:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.576 13:20:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.576 13:20:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.576 ************************************ 00:07:33.576 START TEST nvme_doorbell_aers 00:07:33.576 ************************************ 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:33.576 13:20:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:33.837 [2024-11-26 13:20:22.189230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request. 00:07:43.806 Executing: test_write_invalid_db 00:07:43.806 Waiting for AER completion... 00:07:43.806 Failure: test_write_invalid_db 00:07:43.806 00:07:43.806 Executing: test_invalid_db_write_overflow_sq 00:07:43.806 Waiting for AER completion... 00:07:43.806 Failure: test_invalid_db_write_overflow_sq 00:07:43.806 00:07:43.806 Executing: test_invalid_db_write_overflow_cq 00:07:43.806 Waiting for AER completion... 00:07:43.806 Failure: test_invalid_db_write_overflow_cq 00:07:43.806 00:07:43.806 13:20:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:43.806 13:20:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:43.806 [2024-11-26 13:20:32.210873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request. 00:07:53.774 Executing: test_write_invalid_db 00:07:53.774 Waiting for AER completion... 00:07:53.774 Failure: test_write_invalid_db 00:07:53.774 00:07:53.774 Executing: test_invalid_db_write_overflow_sq 00:07:53.774 Waiting for AER completion... 00:07:53.774 Failure: test_invalid_db_write_overflow_sq 00:07:53.774 00:07:53.774 Executing: test_invalid_db_write_overflow_cq 00:07:53.774 Waiting for AER completion... 00:07:53.774 Failure: test_invalid_db_write_overflow_cq 00:07:53.774 00:07:53.774 13:20:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:53.774 13:20:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:53.774 [2024-11-26 13:20:42.246783] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request. 00:08:03.837 Executing: test_write_invalid_db 00:08:03.837 Waiting for AER completion... 00:08:03.837 Failure: test_write_invalid_db 00:08:03.837 00:08:03.837 Executing: test_invalid_db_write_overflow_sq 00:08:03.837 Waiting for AER completion... 00:08:03.837 Failure: test_invalid_db_write_overflow_sq 00:08:03.837 00:08:03.837 Executing: test_invalid_db_write_overflow_cq 00:08:03.837 Waiting for AER completion... 
00:08:03.837 Failure: test_invalid_db_write_overflow_cq
00:08:03.837
00:08:03.837 13:20:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:03.837 13:20:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:03.837 [2024-11-26 13:20:52.283218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 Executing: test_write_invalid_db
00:08:13.818 Waiting for AER completion...
00:08:13.818 Failure: test_write_invalid_db
00:08:13.818
00:08:13.818 Executing: test_invalid_db_write_overflow_sq
00:08:13.818 Waiting for AER completion...
00:08:13.818 Failure: test_invalid_db_write_overflow_sq
00:08:13.818
00:08:13.818 Executing: test_invalid_db_write_overflow_cq
00:08:13.818 Waiting for AER completion...
00:08:13.818 Failure: test_invalid_db_write_overflow_cq
00:08:13.818
00:08:13.818 ************************************
00:08:13.818 END TEST nvme_doorbell_aers
00:08:13.818 ************************************
00:08:13.818
00:08:13.818 real 0m40.200s
00:08:13.818 user 0m34.146s
00:08:13.818 sys 0m5.667s
00:08:13.818 13:21:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:13.818 13:21:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:13.818 13:21:02 nvme -- nvme/nvme.sh@97 -- # uname
00:08:13.818 13:21:02 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:13.818 13:21:02 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:13.818 13:21:02 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:13.818 13:21:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:13.818 13:21:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:13.818 ************************************
00:08:13.818 START TEST nvme_multi_aen
00:08:13.818 ************************************
00:08:13.818 13:21:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:13.818 [2024-11-26 13:21:02.335636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.335840] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.335854] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.337377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.337409] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.337418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
[2024-11-26 13:21:02.338484] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 [2024-11-26 13:21:02.338509] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 [2024-11-26 13:21:02.338517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 [2024-11-26 13:21:02.340873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 [2024-11-26 13:21:02.341221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 [2024-11-26 13:21:02.341489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63148) is not found. Dropping the request.
00:08:13.818 Child process pid: 63669
00:08:14.076 [Child] Asynchronous Event Request test
00:08:14.076 [Child] Attached to 0000:00:10.0
00:08:14.076 [Child] Attached to 0000:00:11.0
00:08:14.076 [Child] Attached to 0000:00:13.0
00:08:14.076 [Child] Attached to 0000:00:12.0
00:08:14.076 [Child] Registering asynchronous event callbacks...
00:08:14.076 [Child] Getting orig temperature thresholds of all controllers
00:08:14.076 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:14.076 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:14.076 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:14.076 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:14.076 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:14.076 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:14.076 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:14.076 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:14.076 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:14.076 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:14.076 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:14.077 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:14.077 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:14.077 [Child] Cleaning up...
00:08:14.077 Asynchronous Event Request test
00:08:14.077 Attached to 0000:00:10.0
00:08:14.077 Attached to 0000:00:11.0
00:08:14.077 Attached to 0000:00:13.0
00:08:14.077 Attached to 0000:00:12.0
00:08:14.077 Reset controller to setup AER completions for this process
00:08:14.077 Registering asynchronous event callbacks...
00:08:14.077 Getting orig temperature thresholds of all controllers 00:08:14.077 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.077 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.077 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.077 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.077 Setting all controllers temperature threshold low to trigger AER 00:08:14.077 Waiting for all controllers temperature threshold to be set lower 00:08:14.077 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.077 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:14.077 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.077 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:14.077 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.077 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:14.077 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.077 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:14.077 Waiting for all controllers to trigger AER and reset threshold 00:08:14.077 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.077 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.077 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.077 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.077 Cleaning up... 00:08:14.077 00:08:14.077 real 0m0.449s 00:08:14.077 user 0m0.134s 00:08:14.077 sys 0m0.197s 00:08:14.077 13:21:02 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.077 13:21:02 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:14.077 ************************************ 00:08:14.077 END TEST nvme_multi_aen 00:08:14.077 ************************************ 00:08:14.077 13:21:02 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:14.077 13:21:02 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:14.077 13:21:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.077 13:21:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.077 ************************************ 00:08:14.077 START TEST nvme_startup 00:08:14.077 ************************************ 00:08:14.077 13:21:02 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:14.335 Initializing NVMe Controllers 00:08:14.335 Attached to 0000:00:10.0 00:08:14.335 Attached to 0000:00:11.0 00:08:14.335 Attached to 0000:00:13.0 00:08:14.335 Attached to 0000:00:12.0 00:08:14.335 Initialization complete. 00:08:14.335 Time used:150964.562 (us). 
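Every test section in this trace follows the same shape: a run_test call prints the START banner, times the payload binary, and prints the END banner, with the real/user/sys totals in between. The real wrapper lives in common/autotest_common.sh (the @-numbered lines echoed above are its xtrace output); the following is only a minimal sketch of that pattern, with the function body inferred from the banners in this log rather than copied from the source:

    # Hypothetical run_test-style wrapper; see common/autotest_common.sh for the real one.
    run_test() {
        local name=$1
        shift
        printf '%s\n' '************************************' "START TEST $name" '************************************'
        time "$@"          # run the payload binary with its arguments
        local rc=$?        # capture the payload's exit status
        printf '%s\n' '************************************' "END TEST $name" '************************************'
        return $rc
    }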
00:08:14.335 00:08:14.335 real 0m0.217s 00:08:14.335 user 0m0.073s 00:08:14.335 sys 0m0.097s 00:08:14.335 13:21:02 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.335 13:21:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:14.335 ************************************ 00:08:14.335 END TEST nvme_startup 00:08:14.335 ************************************ 00:08:14.335 13:21:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:14.335 13:21:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.335 13:21:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.335 13:21:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.335 ************************************ 00:08:14.335 START TEST nvme_multi_secondary 00:08:14.335 ************************************ 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63725 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63726 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:14.335 13:21:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:17.617 Initializing NVMe Controllers 00:08:17.617 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.617 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.617 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.617 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.617 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:17.617 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:17.618 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:17.618 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:17.618 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:17.618 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:17.618 Initialization complete. Launching workers. 
00:08:17.618 ======================================================== 00:08:17.618 Latency(us) 00:08:17.618 Device Information : IOPS MiB/s Average min max 00:08:17.618 PCIE (0000:00:10.0) NSID 1 from core 1: 5418.27 21.17 2951.55 774.06 9686.99 00:08:17.618 PCIE (0000:00:11.0) NSID 1 from core 1: 5418.27 21.17 2954.95 807.34 10658.05 00:08:17.618 PCIE (0000:00:13.0) NSID 1 from core 1: 5418.27 21.17 2955.43 808.99 10029.01 00:08:17.618 PCIE (0000:00:12.0) NSID 1 from core 1: 5418.27 21.17 2955.45 805.48 10768.34 00:08:17.618 PCIE (0000:00:12.0) NSID 2 from core 1: 5418.27 21.17 2955.76 811.50 10270.83 00:08:17.618 PCIE (0000:00:12.0) NSID 3 from core 1: 5418.27 21.17 2956.03 823.84 10982.04 00:08:17.618 ======================================================== 00:08:17.618 Total : 32509.60 126.99 2954.86 774.06 10982.04 00:08:17.618 00:08:17.876 Initializing NVMe Controllers 00:08:17.876 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.876 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.876 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.876 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.876 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:17.876 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:17.876 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:17.876 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:17.876 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:17.876 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:17.876 Initialization complete. Launching workers. 00:08:17.876 ======================================================== 00:08:17.876 Latency(us) 00:08:17.876 Device Information : IOPS MiB/s Average min max 00:08:17.876 PCIE (0000:00:10.0) NSID 1 from core 2: 2095.26 8.18 7634.97 1438.93 23273.34 00:08:17.876 PCIE (0000:00:11.0) NSID 1 from core 2: 2095.26 8.18 7635.84 1343.36 24356.19 00:08:17.876 PCIE (0000:00:13.0) NSID 1 from core 2: 2095.26 8.18 7636.37 1337.42 25628.45 00:08:17.876 PCIE (0000:00:12.0) NSID 1 from core 2: 2095.26 8.18 7637.13 1526.51 20820.73 00:08:17.876 PCIE (0000:00:12.0) NSID 2 from core 2: 2095.26 8.18 7637.31 1388.33 25720.99 00:08:17.876 PCIE (0000:00:12.0) NSID 3 from core 2: 2095.26 8.18 7638.03 1386.10 27298.98 00:08:17.876 ======================================================== 00:08:17.876 Total : 12571.57 49.11 7636.61 1337.42 27298.98 00:08:17.876 00:08:17.876 13:21:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63725 00:08:19.776 Initializing NVMe Controllers 00:08:19.776 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.776 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:19.776 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.776 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.776 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:19.776 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:19.776 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:19.776 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:19.776 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:19.776 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:19.776 Initialization complete. Launching workers. 
00:08:19.776 ======================================================== 00:08:19.776 Latency(us) 00:08:19.776 Device Information : IOPS MiB/s Average min max 00:08:19.776 PCIE (0000:00:10.0) NSID 1 from core 0: 7359.98 28.75 2172.55 795.97 9749.91 00:08:19.776 PCIE (0000:00:11.0) NSID 1 from core 0: 7359.98 28.75 2173.46 824.18 9635.77 00:08:19.776 PCIE (0000:00:13.0) NSID 1 from core 0: 7359.98 28.75 2173.40 732.70 9154.62 00:08:19.776 PCIE (0000:00:12.0) NSID 1 from core 0: 7359.98 28.75 2173.37 689.79 9401.67 00:08:19.776 PCIE (0000:00:12.0) NSID 2 from core 0: 7359.58 28.75 2173.45 670.71 9280.56 00:08:19.776 PCIE (0000:00:12.0) NSID 3 from core 0: 7359.98 28.75 2173.29 634.19 9590.91 00:08:19.776 ======================================================== 00:08:19.776 Total : 44159.48 172.50 2173.25 634.19 9749.91 00:08:19.776 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63726 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63795 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63796 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:19.776 13:21:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:23.059 Initializing NVMe Controllers 00:08:23.059 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.059 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:23.059 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:23.059 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:23.059 Initialization complete. Launching workers. 
00:08:23.059 ======================================================== 00:08:23.059 Latency(us) 00:08:23.059 Device Information : IOPS MiB/s Average min max 00:08:23.059 PCIE (0000:00:10.0) NSID 1 from core 1: 3418.01 13.35 4679.31 1097.43 10716.13 00:08:23.059 PCIE (0000:00:11.0) NSID 1 from core 1: 3418.01 13.35 4680.98 1148.90 10930.30 00:08:23.059 PCIE (0000:00:13.0) NSID 1 from core 1: 3418.01 13.35 4681.75 1036.79 11577.85 00:08:23.059 PCIE (0000:00:12.0) NSID 1 from core 1: 3418.01 13.35 4681.77 1167.02 10560.53 00:08:23.059 PCIE (0000:00:12.0) NSID 2 from core 1: 3418.01 13.35 4681.75 1163.50 11686.44 00:08:23.059 PCIE (0000:00:12.0) NSID 3 from core 1: 3423.35 13.37 4675.29 1165.67 10980.09 00:08:23.059 ======================================================== 00:08:23.059 Total : 20513.42 80.13 4680.14 1036.79 11686.44 00:08:23.059 00:08:23.059 Initializing NVMe Controllers 00:08:23.059 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.059 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.059 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:23.059 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:23.059 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:23.059 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:23.059 Initialization complete. Launching workers. 00:08:23.059 ======================================================== 00:08:23.059 Latency(us) 00:08:23.059 Device Information : IOPS MiB/s Average min max 00:08:23.059 PCIE (0000:00:10.0) NSID 1 from core 0: 3772.70 14.74 4239.25 1122.46 14925.30 00:08:23.059 PCIE (0000:00:11.0) NSID 1 from core 0: 3772.70 14.74 4240.41 1043.19 13871.53 00:08:23.059 PCIE (0000:00:13.0) NSID 1 from core 0: 3772.70 14.74 4240.31 1086.99 16381.77 00:08:23.059 PCIE (0000:00:12.0) NSID 1 from core 0: 3772.70 14.74 4242.03 947.97 15943.25 00:08:23.059 PCIE (0000:00:12.0) NSID 2 from core 0: 3772.70 14.74 4241.94 947.97 15843.98 00:08:23.059 PCIE (0000:00:12.0) NSID 3 from core 0: 3778.03 14.76 4235.88 973.26 15578.90 00:08:23.059 ======================================================== 00:08:23.059 Total : 22641.54 88.44 4239.97 947.97 16381.77 00:08:23.059 00:08:25.594 Initializing NVMe Controllers 00:08:25.594 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.594 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.594 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.594 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.594 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:25.594 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:25.594 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:25.594 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:25.594 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:25.594 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:25.594 Initialization complete. Launching workers. 
00:08:25.594 ======================================================== 00:08:25.594 Latency(us) 00:08:25.594 Device Information : IOPS MiB/s Average min max 00:08:25.594 PCIE (0000:00:10.0) NSID 1 from core 2: 1825.52 7.13 8763.36 1067.01 27446.77 00:08:25.594 PCIE (0000:00:11.0) NSID 1 from core 2: 1825.52 7.13 8764.50 1179.43 31639.16 00:08:25.594 PCIE (0000:00:13.0) NSID 1 from core 2: 1825.52 7.13 8763.98 1101.66 28089.14 00:08:25.594 PCIE (0000:00:12.0) NSID 1 from core 2: 1825.52 7.13 8764.24 1089.62 27812.27 00:08:25.594 PCIE (0000:00:12.0) NSID 2 from core 2: 1825.52 7.13 8764.07 1102.11 34426.37 00:08:25.594 PCIE (0000:00:12.0) NSID 3 from core 2: 1825.52 7.13 8763.48 1155.86 30114.53 00:08:25.594 ======================================================== 00:08:25.594 Total : 10953.15 42.79 8763.94 1067.01 34426.37 00:08:25.594 00:08:25.594 ************************************ 00:08:25.594 END TEST nvme_multi_secondary 00:08:25.594 ************************************ 00:08:25.594 13:21:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63795 00:08:25.594 13:21:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63796 00:08:25.594 00:08:25.594 real 0m10.927s 00:08:25.594 user 0m18.398s 00:08:25.594 sys 0m0.695s 00:08:25.594 13:21:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.594 13:21:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:25.594 13:21:13 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:25.594 13:21:13 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:25.594 13:21:13 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62758 ]] 00:08:25.594 13:21:13 nvme -- common/autotest_common.sh@1094 -- # kill 62758 00:08:25.594 13:21:13 nvme -- common/autotest_common.sh@1095 -- # wait 62758 00:08:25.595 [2024-11-26 13:21:13.865077] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.865526] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.865584] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.865613] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.869830] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.869944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.869979] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.870010] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.873646] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 
00:08:25.595 [2024-11-26 13:21:13.873725] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.873752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.873779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.877517] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.877602] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.877628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 [2024-11-26 13:21:13.877656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63668) is not found. Dropping the request. 00:08:25.595 13:21:14 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:25.595 13:21:14 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:25.595 13:21:14 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:25.595 13:21:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.595 13:21:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.595 13:21:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 ************************************ 00:08:25.595 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:25.595 ************************************ 00:08:25.595 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:25.595 * Looking for test storage... 
00:08:25.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:25.595 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.595 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.595 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:25.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.857 --rc genhtml_branch_coverage=1 00:08:25.857 --rc genhtml_function_coverage=1 00:08:25.857 --rc genhtml_legend=1 00:08:25.857 --rc geninfo_all_blocks=1 00:08:25.857 --rc geninfo_unexecuted_blocks=1 00:08:25.857 00:08:25.857 ' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:25.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.857 --rc genhtml_branch_coverage=1 00:08:25.857 --rc genhtml_function_coverage=1 00:08:25.857 --rc genhtml_legend=1 00:08:25.857 --rc geninfo_all_blocks=1 00:08:25.857 --rc geninfo_unexecuted_blocks=1 00:08:25.857 00:08:25.857 ' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:25.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.857 --rc genhtml_branch_coverage=1 00:08:25.857 --rc genhtml_function_coverage=1 00:08:25.857 --rc genhtml_legend=1 00:08:25.857 --rc geninfo_all_blocks=1 00:08:25.857 --rc geninfo_unexecuted_blocks=1 00:08:25.857 00:08:25.857 ' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:25.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.857 --rc genhtml_branch_coverage=1 00:08:25.857 --rc genhtml_function_coverage=1 00:08:25.857 --rc genhtml_legend=1 00:08:25.857 --rc geninfo_all_blocks=1 00:08:25.857 --rc geninfo_unexecuted_blocks=1 00:08:25.857 00:08:25.857 ' 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:25.857 
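[Editor's note on the lt / cmp_versions calls traced just above: the helper splits both dotted versions on '.', treats missing components as zero, and compares numerically component by component, which is why this lcov 1.15 run sorts below 2 and exports the lcov_-prefixed --rc option names. A minimal standalone sketch of the same idea, with a hypothetical function name rather than the actual scripts/common.sh code:

    version_lt() {                          # succeeds when $1 < $2, component-wise
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_is_1x=yes   # hypothetical flag
]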
13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:25.857 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:25.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63963 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63963 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63963 ']' 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
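[Editor's note: the get_first_nvme_bdf trace above resolves controller addresses by asking scripts/gen_nvme.sh for a JSON config and extracting each traddr with jq. Condensed into a standalone snippet, using the same commands and the repo path from this run:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # 0000:00:10.0 through 0000:00:13.0 on this VM
    bdf=${bdfs[0]}                # the stuck-admin-command test targets the first BDF

The first address is then handed to bdev_nvme_attach_controller below, which is why the error-injection RPCs all operate on controller nvme0 at 0000:00:10.0.]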
00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.858 13:21:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:25.858 [2024-11-26 13:21:14.342795] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:08:25.858 [2024-11-26 13:21:14.343695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63963 ] 00:08:26.119 [2024-11-26 13:21:14.523877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.119 [2024-11-26 13:21:14.650792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.119 [2024-11-26 13:21:14.651094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.119 [2024-11-26 13:21:14.651482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.119 [2024-11-26 13:21:14.651607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.062 nvme0n1 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_pcNTQ.txt 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.062 true 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732627275 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63986 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:27.062 13:21:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.976 [2024-11-26 13:21:17.411384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:28.976 [2024-11-26 13:21:17.413570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:28.976 [2024-11-26 13:21:17.413613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:28.976 [2024-11-26 13:21:17.413627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:28.976 [2024-11-26 13:21:17.417117] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:28.976 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63986 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63986 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63986 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_pcNTQ.txt 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_pcNTQ.txt 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63963 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63963 ']' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63963 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63963 00:08:28.976 killing process with pid 63963 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63963' 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63963 00:08:28.976 13:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63963 00:08:30.876 13:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:30.876 13:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:30.876 ************************************ 00:08:30.876 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:30.876 ************************************ 00:08:30.876 00:08:30.876 real 0m4.933s 
00:08:30.876 user 0m17.327s 00:08:30.876 sys 0m0.589s 00:08:30.876 13:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.876 13:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 13:21:18 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:30.876 13:21:18 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:30.876 13:21:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.876 13:21:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.876 13:21:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 ************************************ 00:08:30.876 START TEST nvme_fio 00:08:30.876 ************************************ 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:30.876 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:30.876 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:31.134 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:31.134 13:21:19 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.134 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.134 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:31.134 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:31.134 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:31.135 13:21:19 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:31.135 13:21:19 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:31.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:31.135 fio-3.35 00:08:31.135 Starting 1 thread 00:08:37.790 00:08:37.790 test: (groupid=0, jobs=1): err= 0: pid=64123: Tue Nov 26 13:21:25 2024 00:08:37.790 read: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(177MiB/2001msec) 00:08:37.790 slat (nsec): min=3397, max=88709, avg=5010.74, stdev=2308.37 00:08:37.790 clat (usec): min=227, max=8380, avg=2808.85, stdev=945.19 00:08:37.790 lat (usec): min=231, max=8390, avg=2813.86, stdev=946.46 00:08:37.790 clat percentiles (usec): 00:08:37.790 | 1.00th=[ 1844], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:08:37.790 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2540], 00:08:37.790 | 70.00th=[ 2671], 80.00th=[ 2933], 90.00th=[ 4113], 95.00th=[ 5211], 00:08:37.790 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7701], 99.95th=[ 7898], 00:08:37.790 | 99.99th=[ 8160] 00:08:37.790 bw ( KiB/s): min=86248, max=95968, per=99.89%, avg=90586.67, stdev=4943.17, samples=3 00:08:37.790 iops : min=21562, max=23992, avg=22646.67, stdev=1235.79, samples=3 00:08:37.790 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(176MiB/2001msec); 0 zone resets 00:08:37.790 slat (nsec): min=3484, max=60892, avg=5204.22, stdev=2324.88 00:08:37.790 clat (usec): min=271, max=8293, avg=2830.45, stdev=963.14 00:08:37.790 lat (usec): min=276, max=8303, avg=2835.66, stdev=964.37 00:08:37.790 clat percentiles (usec): 00:08:37.790 | 1.00th=[ 1844], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2343], 00:08:37.790 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2573], 00:08:37.790 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 4178], 95.00th=[ 5276], 00:08:37.790 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[ 8029], 00:08:37.790 | 99.99th=[ 8160] 00:08:37.790 bw ( KiB/s): min=88160, max=95720, per=100.00%, avg=90816.00, stdev=4251.89, samples=3 00:08:37.790 iops : min=22040, max=23930, avg=22704.00, stdev=1062.97, samples=3 00:08:37.790 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:08:37.790 lat (msec) : 2=2.21%, 4=87.12%, 10=10.63% 00:08:37.790 cpu : usr=99.20%, sys=0.05%, ctx=18, majf=0, 
minf=606 00:08:37.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:37.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:37.790 issued rwts: total=45364,45098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:37.790 00:08:37.790 Run status group 0 (all jobs): 00:08:37.790 READ: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=177MiB (186MB), run=2001-2001msec 00:08:37.790 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=176MiB (185MB), run=2001-2001msec 00:08:37.790 ----------------------------------------------------- 00:08:37.790 Suppressions used: 00:08:37.790 count bytes template 00:08:37.790 1 32 /usr/src/fio/parse.c 00:08:37.790 1 8 libtcmalloc_minimal.so 00:08:37.790 ----------------------------------------------------- 00:08:37.790 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:37.790 13:21:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:37.790 13:21:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:37.790 13:21:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:37.790 13:21:26 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:37.790 13:21:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:37.790 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:37.790 fio-3.35 00:08:37.790 Starting 1 thread 00:08:43.081 00:08:43.081 test: (groupid=0, jobs=1): err= 0: pid=64188: Tue Nov 26 13:21:31 2024 00:08:43.081 read: IOPS=16.4k, BW=64.2MiB/s (67.4MB/s)(129MiB/2001msec) 00:08:43.081 slat (nsec): min=3953, max=89051, avg=6473.86, stdev=3619.28 00:08:43.081 clat (usec): min=917, max=13889, avg=3852.68, stdev=1350.78 00:08:43.081 lat (usec): min=925, max=13935, avg=3859.16, stdev=1352.21 00:08:43.081 clat percentiles (usec): 00:08:43.081 | 1.00th=[ 2147], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2900], 00:08:43.081 | 30.00th=[ 3032], 40.00th=[ 3163], 50.00th=[ 3294], 60.00th=[ 3556], 00:08:43.081 | 70.00th=[ 4015], 80.00th=[ 4883], 90.00th=[ 5932], 95.00th=[ 6652], 00:08:43.081 | 99.00th=[ 8291], 99.50th=[ 9110], 99.90th=[10290], 99.95th=[10683], 00:08:43.081 | 99.99th=[13698] 00:08:43.081 bw ( KiB/s): min=59616, max=70920, per=100.00%, avg=65826.67, stdev=5734.23, samples=3 00:08:43.081 iops : min=14904, max=17730, avg=16456.67, stdev=1433.56, samples=3 00:08:43.081 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(129MiB/2001msec); 0 zone resets 00:08:43.081 slat (nsec): min=4123, max=88658, avg=6643.57, stdev=3565.88 00:08:43.081 clat (usec): min=942, max=13815, avg=3893.45, stdev=1354.69 00:08:43.081 lat (usec): min=949, max=13830, avg=3900.09, stdev=1356.09 00:08:43.081 clat percentiles (usec): 00:08:43.081 | 1.00th=[ 2147], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2933], 00:08:43.081 | 30.00th=[ 3064], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3589], 00:08:43.081 | 70.00th=[ 4080], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6652], 00:08:43.081 | 99.00th=[ 8225], 99.50th=[ 9110], 99.90th=[10159], 99.95th=[10552], 00:08:43.081 | 99.99th=[13566] 00:08:43.081 bw ( KiB/s): min=59888, max=70816, per=99.55%, avg=65645.33, stdev=5487.57, samples=3 00:08:43.081 iops : min=14972, max=17704, avg=16411.33, stdev=1371.89, samples=3 00:08:43.081 lat (usec) : 1000=0.01% 00:08:43.081 lat (msec) : 2=0.62%, 4=68.85%, 10=30.40%, 20=0.12% 00:08:43.081 cpu : usr=98.70%, sys=0.15%, ctx=4, majf=0, minf=606 00:08:43.081 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:43.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.081 issued rwts: total=32911,32989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.081 00:08:43.081 Run status group 0 (all jobs): 00:08:43.081 READ: bw=64.2MiB/s (67.4MB/s), 64.2MiB/s-64.2MiB/s (67.4MB/s-67.4MB/s), io=129MiB (135MB), run=2001-2001msec 00:08:43.081 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=129MiB (135MB), run=2001-2001msec 00:08:43.081 ----------------------------------------------------- 00:08:43.081 Suppressions used: 00:08:43.081 count bytes template 00:08:43.081 1 32 /usr/src/fio/parse.c 00:08:43.081 1 8 libtcmalloc_minimal.so 00:08:43.081 ----------------------------------------------------- 00:08:43.081 00:08:43.081 13:21:31 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:43.081 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.081 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:43.081 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.343 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:43.343 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.343 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.343 13:21:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.343 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.603 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.603 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.603 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.603 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.603 13:21:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.603 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.603 fio-3.35 00:08:43.603 Starting 1 thread 00:08:48.900 00:08:48.900 test: (groupid=0, jobs=1): err= 0: pid=64249: Tue Nov 26 13:21:37 2024 00:08:48.900 read: IOPS=16.1k, BW=62.8MiB/s (65.8MB/s)(126MiB/2001msec) 00:08:48.900 slat (usec): min=4, max=105, avg= 6.53, stdev= 3.70 00:08:48.900 clat (usec): min=243, max=10737, avg=3943.81, stdev=1324.67 00:08:48.900 lat (usec): min=249, max=10793, avg=3950.34, stdev=1325.90 00:08:48.900 clat percentiles (usec): 00:08:48.900 | 1.00th=[ 2245], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2966], 00:08:48.900 | 30.00th=[ 3064], 40.00th=[ 3195], 
50.00th=[ 3359], 60.00th=[ 3654], 00:08:48.900 | 70.00th=[ 4228], 80.00th=[ 5145], 90.00th=[ 5997], 95.00th=[ 6652], 00:08:48.900 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9503], 00:08:48.900 | 99.99th=[10683] 00:08:48.900 bw ( KiB/s): min=63792, max=65224, per=100.00%, avg=64493.33, stdev=716.45, samples=3 00:08:48.900 iops : min=15948, max=16306, avg=16123.33, stdev=179.11, samples=3 00:08:48.900 write: IOPS=16.1k, BW=62.9MiB/s (65.9MB/s)(126MiB/2001msec); 0 zone resets 00:08:48.900 slat (nsec): min=4999, max=84146, avg=6692.12, stdev=3531.61 00:08:48.900 clat (usec): min=476, max=10647, avg=3981.21, stdev=1338.11 00:08:48.900 lat (usec): min=482, max=10662, avg=3987.90, stdev=1339.23 00:08:48.900 clat percentiles (usec): 00:08:48.900 | 1.00th=[ 2212], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2999], 00:08:48.900 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3687], 00:08:48.900 | 70.00th=[ 4293], 80.00th=[ 5211], 90.00th=[ 6063], 95.00th=[ 6718], 00:08:48.900 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9372], 00:08:48.900 | 99.99th=[10159] 00:08:48.900 bw ( KiB/s): min=62984, max=65616, per=99.78%, avg=64237.33, stdev=1320.47, samples=3 00:08:48.900 iops : min=15746, max=16404, avg=16059.33, stdev=330.12, samples=3 00:08:48.900 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:08:48.900 lat (msec) : 2=0.50%, 4=65.67%, 10=33.77%, 20=0.02% 00:08:48.900 cpu : usr=98.60%, sys=0.10%, ctx=16, majf=0, minf=606 00:08:48.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:48.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.900 issued rwts: total=32153,32206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.900 00:08:48.900 Run status group 0 (all jobs): 00:08:48.900 READ: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=126MiB (132MB), run=2001-2001msec 00:08:48.900 WRITE: bw=62.9MiB/s (65.9MB/s), 62.9MiB/s-62.9MiB/s (65.9MB/s-65.9MB/s), io=126MiB (132MB), run=2001-2001msec 00:08:49.161 ----------------------------------------------------- 00:08:49.161 Suppressions used: 00:08:49.161 count bytes template 00:08:49.161 1 32 /usr/src/fio/parse.c 00:08:49.161 1 8 libtcmalloc_minimal.so 00:08:49.161 ----------------------------------------------------- 00:08:49.161 00:08:49.161 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:49.161 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:49.161 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:49.161 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:49.423 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:49.423 13:21:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.684 13:21:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.684 13:21:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.684 13:21:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.684 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.684 fio-3.35 00:08:49.684 Starting 1 thread 00:08:57.826 00:08:57.826 test: (groupid=0, jobs=1): err= 0: pid=64310: Tue Nov 26 13:21:45 2024 00:08:57.826 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec) 00:08:57.827 slat (usec): min=4, max=103, avg= 6.43, stdev= 3.63 00:08:57.827 clat (usec): min=230, max=12439, avg=3970.49, stdev=1335.99 00:08:57.827 lat (usec): min=236, max=12496, avg=3976.92, stdev=1337.19 00:08:57.827 clat percentiles (usec): 00:08:57.827 | 1.00th=[ 2180], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2966], 00:08:57.827 | 30.00th=[ 3097], 40.00th=[ 3228], 50.00th=[ 3392], 60.00th=[ 3687], 00:08:57.827 | 70.00th=[ 4293], 80.00th=[ 5145], 90.00th=[ 6063], 95.00th=[ 6718], 00:08:57.827 | 99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[ 9503], 99.95th=[ 9896], 00:08:57.827 | 99.99th=[11994] 00:08:57.827 bw ( KiB/s): min=58160, max=69293, per=98.78%, avg=63103.00, stdev=5670.29, samples=3 00:08:57.827 iops : min=14540, max=17323, avg=15775.67, stdev=1417.44, samples=3 00:08:57.827 write: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec); 0 zone resets 00:08:57.827 slat (usec): min=4, max=578, avg= 6.60, stdev= 4.67 00:08:57.827 clat (usec): min=272, max=12192, avg=4005.14, stdev=1336.96 00:08:57.827 lat (usec): min=278, max=12205, avg=4011.74, stdev=1338.06 00:08:57.827 clat percentiles (usec): 00:08:57.827 | 1.00th=[ 2212], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 2999], 00:08:57.827 | 30.00th=[ 3130], 40.00th=[ 3261], 50.00th=[ 3458], 60.00th=[ 3720], 00:08:57.827 | 70.00th=[ 4359], 80.00th=[ 5145], 90.00th=[ 6128], 95.00th=[ 6783], 00:08:57.827 | 
99.00th=[ 7963], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9896], 00:08:57.827 | 99.99th=[11731] 00:08:57.827 bw ( KiB/s): min=57216, max=68958, per=98.08%, avg=62802.00, stdev=5891.72, samples=3 00:08:57.827 iops : min=14304, max=17239, avg=15700.33, stdev=1472.67, samples=3 00:08:57.827 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:08:57.827 lat (msec) : 2=0.64%, 4=64.67%, 10=34.59%, 20=0.04% 00:08:57.827 cpu : usr=98.70%, sys=0.00%, ctx=34, majf=0, minf=604 00:08:57.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.827 issued rwts: total=31958,32032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.827 00:08:57.827 Run status group 0 (all jobs): 00:08:57.827 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:08:57.827 WRITE: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:08:57.827 ----------------------------------------------------- 00:08:57.827 Suppressions used: 00:08:57.827 count bytes template 00:08:57.827 1 32 /usr/src/fio/parse.c 00:08:57.827 1 8 libtcmalloc_minimal.so 00:08:57.827 ----------------------------------------------------- 00:08:57.827 00:08:57.827 13:21:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.827 13:21:45 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:57.827 00:08:57.827 real 0m26.701s 00:08:57.827 user 0m16.154s 00:08:57.827 sys 0m18.943s 00:08:57.827 13:21:45 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.827 13:21:45 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:57.827 ************************************ 00:08:57.827 END TEST nvme_fio 00:08:57.827 ************************************ 00:08:57.827 ************************************ 00:08:57.827 END TEST nvme 00:08:57.827 ************************************ 00:08:57.827 00:08:57.827 real 1m36.188s 00:08:57.827 user 3m37.720s 00:08:57.827 sys 0m29.446s 00:08:57.827 13:21:45 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.827 13:21:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.827 13:21:45 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:57.827 13:21:45 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:57.827 13:21:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.827 13:21:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.827 13:21:45 -- common/autotest_common.sh@10 -- # set +x 00:08:57.827 ************************************ 00:08:57.827 START TEST nvme_scc 00:08:57.827 ************************************ 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:57.827 * Looking for test storage... 
00:08:57.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.827 --rc genhtml_branch_coverage=1 00:08:57.827 --rc genhtml_function_coverage=1 00:08:57.827 --rc genhtml_legend=1 00:08:57.827 --rc geninfo_all_blocks=1 00:08:57.827 --rc geninfo_unexecuted_blocks=1 00:08:57.827 00:08:57.827 ' 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.827 --rc genhtml_branch_coverage=1 00:08:57.827 --rc genhtml_function_coverage=1 00:08:57.827 --rc genhtml_legend=1 00:08:57.827 --rc geninfo_all_blocks=1 00:08:57.827 --rc geninfo_unexecuted_blocks=1 00:08:57.827 00:08:57.827 ' 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.827 --rc genhtml_branch_coverage=1 00:08:57.827 --rc genhtml_function_coverage=1 00:08:57.827 --rc genhtml_legend=1 00:08:57.827 --rc geninfo_all_blocks=1 00:08:57.827 --rc geninfo_unexecuted_blocks=1 00:08:57.827 00:08:57.827 ' 00:08:57.827 13:21:45 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:57.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.827 --rc genhtml_branch_coverage=1 00:08:57.827 --rc genhtml_function_coverage=1 00:08:57.827 --rc genhtml_legend=1 00:08:57.827 --rc geninfo_all_blocks=1 00:08:57.827 --rc geninfo_unexecuted_blocks=1 00:08:57.827 00:08:57.827 ' 00:08:57.827 13:21:45 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:57.827 13:21:45 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:57.827 13:21:45 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:57.827 13:21:45 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:57.827 13:21:45 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.827 13:21:45 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.827 13:21:45 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.827 13:21:45 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.828 13:21:45 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.828 13:21:45 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:57.828 13:21:45 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
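From here to the end of the section the log is the xtrace of scan_nvme_ctrls from test/common/nvme/functions.sh: for each controller under /sys/class/nvme it runs the locally built nvme-cli (id-ctrl for the controller, id-ns for each namespace node such as ng0n1 and nvme0n1) and folds every "field : value" line of the output into a global associative array, which is why every field shows up in the trace as the same IFS=: / read / eval triplet. A minimal sketch of that parse pattern, assuming only nvme-cli's default human-readable output; the real nvme_get writes into a caller-named array (nvme0, ng0n1, ...) via eval rather than a local one:

#!/usr/bin/env bash
# Sketch of the nvme_get parse loop traced below: turn lines like
# "vid       : 0x1b36" from `nvme id-ctrl` into an associative array.
declare -A ctrl
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}   # field names are space-padded; "ps    0" -> "ps0"
  val=${val# }               # drop the single space after the colon
  [[ -n $reg && -n $val ]] && ctrl[$reg]=$val   # skip headers with no value
done < <(nvme id-ctrl /dev/nvme0)
printf 'mdts=%s oncs=%s subnqn=%s\n' "${ctrl[mdts]}" "${ctrl[oncs]}" "${ctrl[subnqn]}"

Note that splitting on the first colon only (val is the last variable read assigns) is what keeps colon-bearing values such as subnqn=nqn.2019-08.org.qemu:12341 intact. The scc test keys off these fields: for the QEMU controller scanned here, oncs=0x15d has bit 8 set, the ONCS bit advertising the (Simple) Copy command this test suite exercises.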
00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:57.828 13:21:45 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:57.828 13:21:45 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.828 13:21:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:57.828 13:21:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:57.828 13:21:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:57.828 13:21:45 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:57.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.089 Waiting for block devices as requested 00:08:58.089 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.089 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.089 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.350 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:03.651 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:03.651 13:21:51 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:03.651 13:21:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:03.651 13:21:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:03.651 13:21:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.651 13:21:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:03.651 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:03.652 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:03.653 13:21:51 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:03.653 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:03.654 13:21:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.654 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:03.655 
13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:03.655 13:21:51 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.655 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:03.656 13:21:51 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 
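
[Editor's note] The fields just captured pin down the namespace geometry: flbas=0x4 selects lbaf4 (the format marked "(in use)" above), whose lbads:12 means 2^12 = 4096-byte blocks, and nsze=0x140000 blocks, so the namespace holds 0x140000 x 4096 = 1,310,720 x 4,096 = 5,368,709,120 bytes = 5 GiB. The same arithmetic in bash, with the values taken from the trace:

    nsze=0x140000; lbads=12           # nvme0n1 values from the records above
    echo $(( nsze * (1 << lbads) ))   # -> 5368709120 bytes = 5 GiB
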
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.656 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:03.656 13:21:51 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:03.657 13:21:51 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:03.657 13:21:51 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:03.657 13:21:51 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.657 13:21:51 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:03.657 13:21:51 
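
[Editor's note] The tail of that record run shows the outer walk: each /sys/class/nvme/nvme* controller is resolved to its PCI address, gated by pci_can_use from scripts/common.sh (its allow/block lists are empty in this run, hence the vacuous `[[ =~ 0000:00:10.0 ]]` match and `return 0`), then registered in the ctrls/nvmes/bdfs/ordered_ctrls maps. A condensed sketch of that bookkeeping, reusing the hypothetical nvme_get above; the readlink-based PCI lookup and the pci_can_use stub are illustrative assumptions, not the script's exact code:

    pci_can_use() { return 0; }       # stub; the real helper checks allow/block lists
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                    # allow/block-list gate
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill nvme1[...]
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its ns map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index by controller number
    done
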
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:03.657 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 
13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:03.658 
13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.658 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:03.659 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.660 13:21:51 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:03.660 13:21:51 
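
[Editor's note] Two details are worth flagging in the records above. First, because the parser splits on every ':', the two-line power-state entry of id-ctrl comes out as ps0 plus a synthetic rwt key carrying the continuation line ('0 rwl:0 idle_power:- active_power:-'); that is a parsing artifact, not a real identify field. Second, the namespace walk binds _ctrl_ns as a nameref to the per-controller map (nvme1_ns) and enumerates both the generic char node (ng1n1) and the block namespace (nvme1n1) with one extglob pattern; both strip to index 1, so the entry written last wins, exactly as seen earlier when _ctrl_ns[1] went from ng0n1 to nvme0n1. A sketch of that loop, again reusing the hypothetical nvme_get:

    shopt -s extglob nullglob
    declare -gA nvme1_ns
    declare -n _ctrl_ns=nvme1_ns          # nameref: writes land in nvme1_ns
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # ng1*, nvme1n*
        ns_dev=${ns##*/}                  # ng1n1 or nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev   # "...n1" -> key 1; last writer wins
    done
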
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:03.660 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
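Every ng1n1[key]=value entry in this run is the final step of the same traced loop in nvme/functions.sh: read one field from nvme-cli output, then eval an assignment into an associative array whose name was passed in as "ref". A minimal sketch of that eval indirection, with illustrative names (set_reg and demo_ns are not SPDK helpers):

#!/usr/bin/env bash
# Sketch of the indirect assignment the @23 trace lines show: eval expands
# only the target array's name; key and value stay quoted parameters.
set -euo pipefail

set_reg() {
    local ref=$1 reg=$2 val=$3
    eval "${ref}[\$reg]=\$val"   # e.g. produces ng1n1[rescap]=0
}

declare -A demo_ns=()
set_reg demo_ns rescap 0
set_reg demo_ns dlfeat 1
echo "rescap=${demo_ns[rescap]} dlfeat=${demo_ns[dlfeat]}"

The eval is needed because a plain ${ref}[key]=val is not a valid bash assignment; newer bash could reach the same effect with a declare -n nameref, which is exactly what the @53 line later in this trace does.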
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:03.661 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
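The @57 and @16 lines show the shape of nvme_get: shift off the target array name, run nvme-cli, and split each output line on the first colon into "reg" and "val". A standalone sketch of that loop, assuming nvme id-ns prints one "field : value" pair per line (ns_info and dev are illustrative names, not SPDK's):

#!/usr/bin/env bash
# Parse `nvme id-ns` human-readable output into an associative array,
# mirroring the IFS=: / read -r reg val loop being traced here.
set -euo pipefail

dev=${1:-/dev/nvme1n1}
declare -A ns_info=()

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # field names are padded with spaces
    val=${val# }                 # drop the space right after the colon
    [[ -n $val ]] || continue    # skip lines with no value (banner/blank)
    ns_info[$reg]=$val           # e.g. ns_info[nsze]=0x17a17a
done < <(nvme id-ns "$dev")

printf '%s=%s\n' nsze "${ns_info[nsze]:-}" flbas "${ns_info[flbas]:-}"

Lines without a value, such as the banner nvme-cli prints first, fail that guard; this is the [[ -n '' ]] hit visible right after the @16 call below. Note that lbaf lines keep their internal colons because read folds everything after the first separator into val.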
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:03.662 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:03.663 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:03.663 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:03.663 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:03.663 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:03.663 13:21:51 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
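For this namespace the captured fields decode to concrete geometry: flbas=0x7 selects lbaf7 (ms:64 lbads:12, marked "in use"), and nsze counts logical blocks. A short sketch with the values hard-coded from this trace; the bit layout (FLBAS bits 3:0 select the current LBA format, LBADS is log2 of the data size) is per the NVMe base specification:

#!/usr/bin/env bash
# Decode namespace geometry from the id-ns fields captured above.
set -euo pipefail

nsze=0x17a17a   # namespace size in logical blocks (from nvme1n1[nsze])
flbas=0x7       # formatted LBA size field (from nvme1n1[flbas])
lbads=12        # from lbaf7: 'ms:64 lbads:12 rp:0 (in use)'

fmt=$(( flbas & 0xf ))    # low nibble: index of the LBA format in use
block=$(( 1 << lbads ))   # 2^12 = 4096-byte logical blocks
bytes=$(( nsze * block ))

printf 'format #%d, %d-byte blocks, %d blocks, ~%d GB\n' \
    "$fmt" "$block" "$nsze" $(( bytes / 1000000000 ))

With these values that works out to roughly 6 GB, consistent with a small QEMU-backed test disk; the ms:64 part of lbaf7 means each block also carries 64 bytes of metadata.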
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:03.663 13:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:03.664 13:21:52 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:03.664 13:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:03.664 13:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:03.664 13:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
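The @47-@52 lines, with the detour into scripts/common.sh, are the discovery step: each /sys/class/nvme controller is resolved to its PCI address and passed through pci_can_use; with no filter configured (hence the empty-looking [[ =~ ]] and [[ -z '' ]] tests), it returns 0 and the controller is kept. A rough standalone equivalent, assuming the usual sysfs "device" symlink and a hypothetical PCI_ALLOWED allowlist variable (not SPDK's exact interface):

#!/usr/bin/env bash
# Enumerate NVMe controllers via sysfs and apply an optional PCI allowlist.
set -euo pipefail

allowed=${PCI_ALLOWED:-}   # space-separated BDFs; empty means allow all

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # Resolve the controller to its PCI address, e.g. 0000:00:12.0
    bdf=$(basename "$(readlink -f "$ctrl/device")")
    if [[ -n $allowed && " $allowed " != *" $bdf "* ]]; then
        continue   # not on the allowlist, skip this controller
    fi
    echo "controller ${ctrl##*/} -> $bdf"
done

The id-ctrl values that follow identify the device itself: vid 0x1b36 is Red Hat/QEMU's PCI vendor ID and ssvid 0x1af4 is the legacy virtio vendor ID, which is why the model string reads "QEMU NVMe Ctrl".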
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:03.664 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
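Several of the id-ctrl fields above are bitmasks rather than plain numbers; oacs=0x12a, for instance, advertises the controller's optional admin commands. A small sketch decoding that value (bit positions per the NVMe base specification; only a few bits are listed here):

#!/usr/bin/env bash
# Decode a subset of OACS (Optional Admin Command Support) bits.
set -euo pipefail

oacs=0x12a   # from the nvme2[oacs] entry above
declare -A oacs_bits=(
    [1]="Format NVM"
    [3]="Namespace Management"
    [5]="Directives"
    [8]="Doorbell Buffer Config"
)
for bit in "${!oacs_bits[@]}"; do
    if (( oacs >> bit & 1 )); then
        echo "OACS bit $bit: ${oacs_bits[$bit]}"
    fi
done

Bits 1, 3, 5 and 8 are exactly the set bits of 0x12a, which lines up with what QEMU's emulated NVMe controller supports.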
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.665 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:03.666 
13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:03.666 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.667 
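
The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` loop traced just above (functions.sh@54) enumerates the controller's namespace nodes with an extglob pattern built from two parameter expansions, and the nameref declared at @53 (`local -n _ctrl_ns=nvme2_ns`) is where @58 later files each parsed array under its numeric index. A minimal sketch of the expansions, assuming `ctrl` points at the nvme2 controller as in this run:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # "${ctrl##*nvme}" strips through the last "nvme"  -> "2"
    # "${ctrl##*/}"    strips the directory prefix     -> "nvme2"
    # so the pattern is equivalent to /sys/class/nvme/nvme2/@(ng2|nvme2n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"
    done

That single pattern therefore picks up both the generic character-device nodes (ng2n1, ng2n2, ng2n3) and the block-device nodes (nvme2n1, ...), in exactly the glob-sorted order the id-ns dumps appear below.
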
13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
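
Each of these id-ns dumps is produced by the same `nvme_get` helper whose line numbers appear in the trace: it declares a global associative array named after the device node (@20), feeds the nvme-cli output through an `IFS=: read` loop (@21), skips lines with no ": value" part (@22), and evals one array assignment per field (@23). A close reconstruction from the trace follows; the whitespace cleanup is an assumption, since the trace only ever shows the already-trimmed keys:

    # nvme_get, reconstructed from the trace above (cleanup details assumed)
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declares global assoc array ng2n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue     # banner lines carry no ": value" pair
            reg=${reg//[[:space:]]/}                # "nsze     " -> "nsze"
            val=${val#"${val%%[![:space:]]*}"}      # drop the leading padding
            eval "${ref}[$reg]=\"$val\""            # ng2n1[nsze]="0x100000"
        done < <(nvme "$@")               # nvme resolves to /usr/local/src/nvme-cli/nvme here
    }

    nvme_get ng2n1 id-ns /dev/ng2n1   # call shape as traced at functions.sh@57
    echo "${ng2n1[nsze]}"             # -> 0x100000

Splitting only at the first colon is what makes multi-colon values such as `ps0` ("mp:25.00W operational enlat:16 ...") land intact in `val`, matching the assignments recorded earlier for the nvme2 controller.
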
00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.667 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:03.668 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:03.668 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 
13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.669 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.670 13:21:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.670 13:21:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.670 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.671 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.672 13:21:52 nvme_scc -- 
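For scale: nsze, ncap and nuse are counts of logical blocks, not bytes. flbas=0x4 just above selects lbaf4, which the descriptor list further down reports as lbads:12, i.e. 4096-byte blocks, so 0x100000 blocks works out to 4 GiB per namespace (illustrative arithmetic, values taken from the trace):

    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # -> 4 GiB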
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:03.672 13:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.672 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
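The for-ns glob at functions.sh@54, visible above each namespace pass, is an extglob pattern that picks up both the block namespaces (nvme2n1, nvme2n2, ...) and their generic character-device twins (ng2n1, ...) under one controller directory. A standalone sketch (assumes extglob is enabled, which the @() pattern requires):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -a _ctrl_ns=()
    # for this controller the pattern expands to "$ctrl/"@(ng2|nvme2n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}
        _ctrl_ns[${ns##*n}]=$ns_dev     # keyed by namespace id: ngXnY and nvmeXnY
    done                                # for the same id land in the same slot

That indexing is why ng2n3 earlier and nvme2n3 later both end in _ctrl_ns[3]: the block-device entry simply overwrites the char-device one.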
]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:03.673 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.673 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- 
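The geometry reported for nvme2n2 is identical to nvme2n1, as expected for QEMU-backed namespaces created with the same parameters. Once populated, the arrays can be spot-checked like any other associative array (an illustrative check, not part of the traced script):

    for f in nsze ncap nuse nsfeat flbas mssrl mcl msrc; do
        [[ ${nvme2n1[$f]} == "${nvme2n2[$f]}" ]] || echo "ns1/ns2 differ on $f"
    done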
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:03.674 
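flbas=0x4 marks lbaf4 as the in-use format, and its descriptor string 'ms:0 lbads:12 rp:0 (in use)' decodes to: no separate metadata, 2^12 = 4096-byte data blocks, best relative performance. Pulling the block size back out of the stored string (a sketch; assumes the low four flbas bits index the format, as they do when nlbaf <= 16):

    desc=${nvme2n2[lbaf$(( nvme2n2[flbas] & 0xf ))]}   # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${desc##*lbads:}
    lbads=${lbads%% *}
    echo "in-use block size: $(( 1 << lbads )) bytes"  # -> 4096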
13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.674 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:03.675 13:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.675 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:03.676 13:21:52 nvme_scc -- 
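functions.sh@58-63: with the third namespace stored, the controller itself is registered in the global maps (the bdf and ordering entries follow just below). The shape of that bookkeeping, sketched from the traced assignments:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls[$ctrl_dev]=nvme2                  # id-ctrl fields live in the array 'nvme2'
    nvmes[$ctrl_dev]=nvme2_ns               # name of this controller's namespace map
    bdfs[$ctrl_dev]=0000:00:12.0            # PCI address recovered from the sysfs path
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2  # numeric slot 2 keeps enumeration stable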
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:03.676 13:21:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:03.676 13:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:03.676 13:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:03.676 13:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 
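Before probing 0000:00:13.0 the script runs it through pci_can_use (scripts/common.sh@18-27 above). Reconstructed from the trace, and assuming the PCI_ALLOWED / PCI_BLOCKED environment variables that common.sh consults (both empty in this run, so every BDF falls through to return 0):

    pci_can_use() {
        local i pci=$1
        # the bare '[[ =~ 0000:00:13.0 ]]' in the trace is this test with an
        # empty block list on the left-hand side
        [[ " ${PCI_BLOCKED:-} " =~ \ $pci\  ]] && return 1
        [[ -z ${PCI_ALLOWED:-} ]] && return 0   # no allow list: bind everything
        for i in $PCI_ALLOWED; do
            [[ $i == "$pci" ]] && return 0
        done
        return 1                                # allow list set but no match
    }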
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.676 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:03.677 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:03.677 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 
13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.677 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:03.677 13:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 
13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:03.678 
13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:03.678 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:03.679 13:21:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:03.679 13:21:52 nvme_scc -- 
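The block above is the nvme_get parser at work: functions.sh@16-23 pipes nvme id-ctrl through IFS=: and read -r reg val, then evals each pair into a global associative array named after the controller. A minimal runnable sketch of that pattern, assuming nvme-cli's usual "reg : val" output format; the helper name is ours, not functions.sh's:

    # Sketch of the nvme_get pattern visible in the trace: split each
    # "reg : val" line of `nvme id-ctrl` on the first ':' and store it
    # in a global associative array named after the controller.
    nvme_get_sketch() {
      local ref=$1 reg val
      local -gA "$ref=()"                   # e.g. declares global nvme3=()
      while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue  # skip header/blank lines
        reg=${reg//[[:space:]]/}              # strip padding around the key
        eval "${ref}[${reg}]=\"${val# }\""    # e.g. nvme3[oncs]=0x15d
      done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
    }

After a call like nvme_get_sketch nvme3, ${nvme3[oncs]} yields 0x15d, which is exactly what the feature checks below read back.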
nvme/functions.sh@63 -- # ordered_ctrls[3]=nvme3
nvme/functions.sh@65 -- # (( 4 > 0 ))
nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature scc))
nvme/functions.sh@198-199 -- # ctrl_has_scc is run for every controller in "${!ctrls[@]}": nvme1, nvme0, nvme3 and nvme2 each report oncs=0x15d, (( oncs & 1 << 8 )) is true for all four, and each is echoed
nvme/functions.sh@208 -- # echo nvme1
nvme/functions.sh@209 -- # return 0
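The ctrl_has_scc test traced here reduces to a nameref lookup plus one bit test: ONCS bit 8 is the NVMe Copy (Simple Copy) capability bit, and 0x15d has it set (0x15d & 0x100 = 0x100). A sketch of that check, with the helper name ours:

    # True when the controller's ONCS field advertises the Copy command
    # (Simple Copy), i.e. ONCS bit 8 is set.
    ctrl_has_scc_sketch() {
      local -n _ctrl=$1                 # nameref: "$1" names an assoc array
      (( ${_ctrl[oncs]:-0} & 1 << 8 ))  # exit status 0 when the bit is set
    }
    # e.g. after the scan: ctrl_has_scc_sketch nvme1 && echo "nvme1 has SCC"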
nvme/nvme_scc.sh@17 -- # ctrl=nvme1
nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
************************************
START TEST nvme_simple_copy
************************************
Initializing NVMe Controllers
Attaching to 0000:00:10.0
Controller supports SCC. Attached to 0000:00:10.0
Namespace ID: 1 size: 6GB
Initialization complete.
Controller QEMU NVMe Ctrl (12340 )
Controller PCI vendor:6966 PCI subsystem vendor:6900
Namespace Block Size:4096
Writing LBAs 0 to 63 with Random Data
Copied LBAs from 0 - 63 to the Destination LBA 256
LBAs matching Written Data: 64

real 0m0.289s
user 0m0.106s
sys 0m0.079s
************************************
END TEST nvme_simple_copy
************************************

real 0m7.912s
user 0m1.156s
sys 0m1.481s
************************************
END TEST nvme_scc
************************************
spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
common/autotest_common.sh@1111 -- # xtrace_disable
************************************
START TEST nvme_fdp
************************************
common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
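The START/END TEST banners and the real/user/sys lines come from the run_test wrapper in autotest_common.sh. A sketch of the banner-and-time behaviour as it shows up in this log; the real wrapper also does the argument check and xtrace toggling seen at @1105/@1111 above, which this sketch omits:

    # Print a banner, time the test body, print a closing banner.
    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }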
common/autotest_common.sh@1692-1693 -- # lcov --version | awk '{print $NF}' reports 1.15; the 'lt 1.15 2' check calls cmp_versions 1.15 '<' 2, which splits both strings on IFS=.-: and compares them component by component (1 < 2 at the first component), so the test succeeds
common/autotest_common.sh@1694-1707 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'; LCOV_OPTS and LCOV are exported with the matching --rc lcov_*, genhtml_* and geninfo_* settings
cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
scripts/common.sh@15 -- # shopt -s extglob
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
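The cmp_versions comparison traced above splits each version string on IFS=.-: and walks the components in order. A sketch of that comparison; the 10# base prefix and the helper name are ours:

    # usage: lt_sketch 1.15 2  -> exit status 0, meaning "1.15 < 2"
    lt_sketch() {
      local IFS=.-:                       # split on '.', '-' and ':'
      local -a ver1=($1) ver2=($2)
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
      done
      return 1                            # equal is not "less than"
    }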
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
paths/export.sh@2-6 -- # prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH (each pass through export.sh stacks the same three directories again, so they appear several times over) and exports the result
nvme/functions.sh@10-13 -- # ctrls=(); declare -A ctrls; nvmes=(); declare -A nvmes; bdfs=(); declare -A bdfs; ordered_ctrls=(); declare -a ordered_ctrls
nvme/functions.sh@14 -- # nvme_name=
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
Waiting for block devices as requested
0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
* Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
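scan_nvme_ctrls, entered next, ties the earlier pieces together: enumerate /sys/class/nvme/nvme*, filter through pci_can_use, parse id-ctrl, and record each controller in the ctrls/nvmes/bdfs/ordered_ctrls maps declared above. A skeleton consistent with this trace; the sysfs PCI-address derivation and the stubbed pci_can_use are assumptions, since the trace only shows their results:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { [[ -z ${PCI_ALLOWED:-} || $PCI_ALLOWED =~ $1 ]]; }  # stub
    scan_nvme_ctrls_sketch() {
      local ctrl ctrl_dev pci
      for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:11.0
        pci_can_use "$pci" || continue                   # honor allow/block lists
        ctrl_dev=${ctrl##*/}                             # e.g. nvme0
        nvme_get_sketch "$ctrl_dev"                      # parser sketched earlier
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by ctrl number
      done
    }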
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:11.656 13:21:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:11.656 13:21:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:11.656 13:21:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.656 13:21:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.656 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:11.657 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.657 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.657 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # 
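Note: the wctemp/cctemp values captured just above are in kelvins, which is how NVMe id-ctrl encodes temperature thresholds. A minimal conversion sketch (kelvin_to_celsius is an illustrative helper, not part of nvme/functions.sh):

  # Convert id-ctrl temperature thresholds from kelvins to degrees Celsius
  # (integer approximation using a 273 K offset).
  kelvin_to_celsius() { echo "$(( $1 - 273 )) C"; }
  kelvin_to_celsius 343   # wctemp -> 70 C (warning composite temperature)
  kelvin_to_celsius 373   # cctemp -> 100 C (critical composite temperature)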
IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:11.658 13:21:59 nvme_fdp -- 
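Note: the sqes=0x66 and cqes=0x44 values captured just above encode queue entry sizes as power-of-two exponents, low nibble for the required size and high nibble for the maximum. A small decode sketch (decode_qes is an illustrative helper, not part of the test scripts):

  # Decode SQES/CQES: entry sizes are 2^nibble bytes (low = required, high = max).
  decode_qes() {
    local val=$(( $1 ))
    printf '%s: required %d B, max %d B\n' "$2" \
      $(( 1 << (val & 0xf) )) $(( 1 << ((val >> 4) & 0xf) ))
  }
  decode_qes 0x66 sqes   # 64 B / 64 B submission queue entries
  decode_qes 0x44 cqes   # 16 B / 16 B completion queue entries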
nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:11.658 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 
13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:11.659 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:11.659 13:21:59 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.659 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:11.660 13:21:59 nvme_fdp -- 
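Note: the @16 through @23 trace lines above show the shape of nvme_get: run nvme-cli's id-ctrl/id-ns against the device, split each "field : value" output line on ':', and eval non-empty pairs into a global associative array named by the first argument. A stripped-down standalone sketch of that pattern, under the assumption of simple one-colon field lines (nvme_get_sketch is an illustrative name; the real nvme/functions.sh does additional trimming and shifting):

  # Parse "field : value" lines from nvme-cli into a named associative array.
  nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}          # strip padding around the field name
      val=${val# }                      # drop the single space after ':'
      [[ -n $val ]] && eval "${ref}[$reg]=\"\$val\""
    done < <(nvme id-ctrl "$dev")
  }
  # Usage (needs nvme-cli and a device): nvme_get_sketch nvme0 /dev/nvme0; echo "${nvme0[oacs]}"

Because read is given two variables, everything after the first ':' lands in val, which is why multi-colon values such as the ps0 power-state line survive intact in the arrays above.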
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:11.660 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.660 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
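Note: with the ng0n1 LBA format table now complete above, the namespace geometry can be read off directly: flbas 0x4 selects lbaf4 (ms:0 lbads:12, the format marked in use), so logical blocks are 2^12 = 4096 bytes, and nsze 0x140000 blocks at that size is exactly 5 GiB. Checked in plain bash arithmetic:

  nsze=$(( 0x140000 ))                           # namespace size in logical blocks
  lbads=12                                       # from lbaf4: lbads:12
  echo "$(( nsze * (1 << lbads) )) bytes"        # 5368709120
  echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 5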
00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:11.661 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.661 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.662 13:21:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:11.662 13:21:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:11.662 13:21:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:11.662 13:21:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:11.663 13:21:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.663 13:21:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:11.663 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:11.663 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
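The trace above repeats a single pattern: nvme_get (nvme/functions.sh@16-23) runs an nvme-cli query and folds each "key: value" line of its output into a global associative array, so later checks can read fields such as ${nvme1[sn]} directly. A minimal sketch of that loop, reconstructed only from the @-numbered trace lines (whitespace trimming is approximated; the repository's exact helper may differ):

# Sketch of the nvme_get pattern traced above (nvme/functions.sh@16-23).
# NVME_CMD matches the binary path shown in the trace.
NVME_CMD=${NVME_CMD:-/usr/local/src/nvme-cli/nvme}

nvme_get() {
    local ref=$1 reg val                    # @17: name of the array to fill
    shift                                   # @18: rest is the nvme-cli sub-command
    local -gA "$ref=()"                     # @20: (re)declare a global associative array
    while IFS=: read -r reg val; do         # @21: split each output line on ':'
        reg=${reg//[[:space:]]/}            # keys in the trace are bare words (vid, sn, ps0, ...)
        val=${val# }                        # drop the pad after ':'; trailing blanks kept (sn="12340 ")
        [[ -n $reg && -n $val ]] || continue  # @22: skip lines with no value
        eval "${ref}[$reg]=\"$val\""        # @23: e.g. nvme1[vid]="0x1b36"
    done < <("$NVME_CMD" "$@")              # @16: e.g. nvme id-ctrl /dev/nvme1
}

# Usage: nvme_get nvme1 id-ctrl /dev/nvme1 && echo "${nvme1[sn]}"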
00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
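Zooming out, the @47-@63 entries earlier in this trace show how each controller found under /sys/class/nvme is vetted and registered: pci_can_use (scripts/common.sh) filters the controller's BDF against allow/deny lists, nvme_get snapshots id-ctrl, and the controller is recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A hedged reconstruction of that loop; the derivation of $pci from sysfs is an assumption, and pci_can_use's body is elided:

# Reconstruction of the controller discovery loop (nvme/functions.sh@47-63, per the trace).
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do                     # @47
    [[ -e $ctrl ]] || continue                            # @48
    pci=$(basename "$(readlink -f "$ctrl/device")")       # @49: BDF, e.g. 0000:00:10.0 (assumed derivation)
    pci_can_use "$pci" || continue                        # @50: allow/deny filtering in scripts/common.sh
    ctrl_dev=${ctrl##*/}                                  # @51: e.g. nvme1
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"         # @52: fill the nvme1[...] array
    # @53-@58: per-namespace id-ns parsing fills ${ctrl_dev}_ns (sketched further below)
    ctrls["$ctrl_dev"]=$ctrl_dev                          # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                     # @61: name of this controller's namespace map
    bdfs["$ctrl_dev"]=$pci                                # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev            # @63: indexable by controller number
done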
00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.664 13:21:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:11.664 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.665 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:11.666 13:21:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
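The @53-@58 entries around this stretch show the per-namespace half of that loop: an extglob pattern matches both the character-device node (ng1n1) and the block-device node (nvme1n1) under the controller's sysfs directory, each gets its own id-ns snapshot via nvme_get, and both land in the controller's namespace map under the namespace number, with the later nvmeXnY entry overwriting the earlier ngXnY one. A sketch under those assumptions (declare -n stands in for the function-local 'local -n' at @53):

# Sketch of the namespace walk (nvme/functions.sh@53-58, per the trace). Needs extglob.
shopt -s extglob nullglob

declare -A nvme1_ns
declare -n _ctrl_ns=nvme1_ns       # @53 uses 'local -n' inside the function

ctrl=/sys/class/nvme/nvme1
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: matches ng1n* and nvme1n*
    [[ -e $ns ]] || continue                                 # @55
    ns_dev=${ns##*/}                                         # @56: ng1n1, then nvme1n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # @57: fills ng1n1[...] / nvme1n1[...]
    _ctrl_ns[${ns##*n}]=$ns_dev                              # @58: keyed by namespace id ("1")
done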
00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:11.666 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.666 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
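For reference when reading the lbafN strings being captured here: per the NVMe spec, lbads is the log2 of the data block size (lbads:9 = 512 B, lbads:12 = 4096 B), ms is the per-block metadata size, and the "(in use)" suffix marks the format selected by the low bits of flbas; ng1n1 reports flbas=0x7 above and, further down, lbaf7 ("ms:64 lbads:12") as the active format. A one-liner to pull the block size out of a captured string:

# Worked example: decode the data block size from an lbafN value captured by nvme_get.
lbaf='ms:64 lbads:12 rp:0 (in use)'   # ng1n1[lbaf7] as captured in the trace
[[ $lbaf =~ lbads:([0-9]+) ]] && echo "data block: $((1 << BASH_REMATCH[1])) bytes"  # -> 4096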
00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.667 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.667 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:11.668 13:21:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:11.668 13:21:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
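
The run above is functions.sh's nvme_get loop at work: nvme-cli's id-ns output is read line by line with IFS=: so each "field : value" pair lands in reg/val, and eval stores the pair in a global associative array named after the device. A minimal re-implementation of that pattern, assuming nvme-cli is installed; the array name ns_info and the device path are illustrative, not taken from nvme/functions.sh:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing pattern traced above: split nvme-cli's
# "field : value" lines on the first colon and store them in a global
# associative array.
declare -gA ns_info=()
while IFS=: read -r reg val; do
  [[ -n $val ]] || continue   # the trace skips lines with no value (banners etc.)
  reg=${reg//[[:space:]]/}    # field names carry padding in nvme-cli output
  ns_info[$reg]=${val# }      # keep the value, minus the single leading space
done < <(nvme id-ns /dev/nvme1n1)
echo "nsfeat=${ns_info[nsfeat]} flbas=${ns_info[flbas]}"

With the array filled in, flbas=0x7 above is easy to read: the low nibble selects LBA format 7, which the trace reports as 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks carrying 64 bytes of metadata.
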
00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.668 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:11.669 13:21:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:11.669 13:21:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:11.669 13:21:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.669 13:21:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.669 13:21:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
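
Having finished nvme1n1, the trace registers the controller (ctrls, nvmes, bdfs, ordered_ctrls at functions.sh@58-63) and moves on to /sys/class/nvme/nvme2, checking its PCI address with pci_can_use before parsing id-ctrl. A sketch of that discovery walk, under the assumptions that the sysfs address attribute holds the PCI bdf (the PCIe-transport behavior) and with a plain allowlist standing in for pci_can_use() from scripts/common.sh:

#!/usr/bin/env bash
# Enumerate controllers under sysfs, read each one's PCI address, and key
# lookup tables by controller name, as the trace does for nvme1 and nvme2.
declare -A ctrls=() bdfs=()
allow=()                                  # empty allowlist == accept all, as here
for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  name=${ctrl##*/}                        # nvme0, nvme1, ...
  bdf=$(<"$ctrl/address")                 # e.g. 0000:00:12.0 on PCIe transports
  if ((${#allow[@]})) && [[ " ${allow[*]} " != *" $bdf "* ]]; then
    continue                              # filtered out, like pci_can_use returning non-zero
  fi
  ctrls[$name]=$name
  bdfs[$name]=$bdf
done
declare -p ctrls bdfs
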
00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.669 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:11.670 13:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
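
Two of the values just captured, wctemp=343 and cctemp=373, are the composite-temperature thresholds that Identify Controller reports in Kelvin; with T(C) = T(K) - 273 they come out to a 70 C warning and a 100 C critical threshold. A self-contained check using the traced values:

# Decode the Kelvin temperature thresholds captured in the trace above.
declare -A nvme2=([wctemp]=343 [cctemp]=373)
echo "warning=$(( nvme2[wctemp] - 273 ))C critical=$(( nvme2[cctemp] - 273 ))C"
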
00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:11.670 13:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.670 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:11.671 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
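
sqes=0x66 and cqes=0x44 above each pack two sizes: per the Identify Controller layout, the low nibble is the log2 of the required queue-entry size and the high nibble the log2 of the maximum. Decoding the traced values gives 64-byte submission and 16-byte completion entries; a small self-contained decode, reusing the values captured above:

# Unpack the log2-encoded queue entry sizes from the traced id-ctrl fields.
declare -A nvme2=([sqes]=0x66 [cqes]=0x44)
for f in sqes cqes; do
  v=$(( nvme2[$f] ))                     # bash arithmetic accepts the 0x prefix
  printf '%s: required=%dB max=%dB\n' "$f" $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))
done
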
00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 
13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- 
00:09:11.672 13:22:00 nvme_fdp -- nvme/functions.sh@21-23 -- id-ns parse for ng2n1 (continued): npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:11.673 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0', lbaf1='ms:8 lbads:9 rp:0', lbaf2='ms:16 lbads:9 rp:0', lbaf3='ms:64 lbads:9 rp:0', lbaf4='ms:0 lbads:12 rp:0 (in use)', lbaf5='ms:8 lbads:12 rp:0', lbaf6='ms:16 lbads:12 rp:0', lbaf7='ms:64 lbads:12 rp:0'
00:09:11.673 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:11.673 13:22:00 nvme_fdp -- nvme/functions.sh@54-57 -- next namespace: [[ -e /sys/class/nvme/nvme2/ng2n2 ]], ns_dev=ng2n2, nvme_get ng2n2 id-ns /dev/ng2n2 (/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2); parsing begins
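The xtrace above comes from the nvme_get helper in nvme/functions.sh: it pipes /usr/local/src/nvme-cli/nvme id-ns output through an `IFS=: read -r reg val` loop and evals each field into a global associative array named after the namespace. A minimal sketch of that pattern, assuming nvme-cli's one-field-per-line "name : value" output (the helper name nvme_get_sketch is ours, not SPDK's; the array handling mirrors the `local -gA 'ng2n2=()'` and `eval 'ng2n2[nsze]="0x100000"'` lines in the trace):

    # Sketch of the parse loop seen at nvme/functions.sh@17-23 in this trace.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                 # global associative array named by $ref, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # strip padding around the field name, e.g. "lbaf  4 " -> lbaf4
            val=${val# }                    # drop the space after the first colon; later colons stay in val
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # e.g. ng2n2[nsze]=0x100000
        done < <(nvme id-ns "$dev")
    }

With ng2n2 populated this way, `echo "${ng2n2[nsze]}"` would print 0x100000, matching the values recorded below.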
00:09:11.673 13:22:00 nvme_fdp -- nvme/functions.sh@21-23 -- id-ns parse for ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:11.674 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- ng2n2 LBA formats: identical to ng2n1 (lbaf0-lbaf3 at lbads:9, lbaf4-lbaf7 at lbads:12, ms 0/8/16/64, rp:0 throughout, lbaf4 in use)
00:09:11.675 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:11.675 13:22:00 nvme_fdp -- nvme/functions.sh@54-57 -- next namespace: [[ -e /sys/class/nvme/nvme2/ng2n3 ]], ns_dev=ng2n3, nvme_get ng2n3 id-ns /dev/ng2n3
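Between parses, functions.sh@54-58 walks the controller's sysfs directory with an extglob that matches both the ngXnY character-device entries and the nvmeXnY block-device entries, recording each under its namespace index in _ctrl_ns. A hedged reconstruction of that loop under the paths shown in the trace (extglob must be enabled; the commented call stands in for the real nvme_get):

    # Sketch of the namespace enumeration from nvme/functions.sh@54-58.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng2n* and nvme2n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # e.g. ng2n3 or nvme2n1
        # nvme_get_sketch "$ns_dev" "/dev/$ns_dev"   # parse id-ns as sketched earlier
        _ctrl_ns[${ns##*n}]=$ns_dev         # key by namespace index: ng2n3 -> 3
    done

Note that ng2n1 and nvme2n1 both reduce to key 1, so whichever entry the glob yields last wins; that is consistent with _ctrl_ns[1] being overwritten with nvme2n1 further down in this trace.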
00:09:11.675 13:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:11.675 13:22:00 nvme_fdp -- nvme/functions.sh@21-23 -- id-ns parse for ng2n3: all fields match ng2n1/ng2n2 (nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, nguid/eui64 all zeroes, lbaf0-lbaf7 as above with lbaf4 in use)
00:09:11.676 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:11.676 13:22:00 nvme_fdp -- nvme/functions.sh@54-56 -- next namespace: [[ -e /sys/class/nvme/nvme2/nvme2n1 ]], ns_dev=nvme2n1
00:09:11.676 13:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
00:09:11.677 13:22:00 nvme_fdp -- nvme/functions.sh@21-23 -- id-ns parse for nvme2n1: identical values again (nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 dlfeat=1 mssrl=128 mcl=128 msrc=127, remaining fields 0, nguid/eui64 all zeroes, lbaf0-lbaf7 as above with lbaf4 in use)
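Every namespace in this trace reports the same LBA format table: lbaf0-lbaf3 with lbads:9 (512-byte data blocks), lbaf4-lbaf7 with lbads:12 (4096-byte), metadata sizes 0/8/16/64, and flbas=0x4 marking lbaf4 as in use. Since the low nibble of FLBAS indexes the format and lbads is the log2 of the data block size, the active block size can be derived directly from the parsed fields; a hypothetical one-off check against the values above:

    # Hypothetical helper: derive the active LBA size from the parsed fields.
    declare -A nvme2n1=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt_index=$(( ${nvme2n1[flbas]} & 0xf ))        # FLBAS bits 3:0 select the lbaf entry
    lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "${nvme2n1[lbaf${fmt_index}]}")
    echo "block size: $((1 << lbads)) bytes"        # lbads:12 -> 4096 bytes

For these namespaces that yields 4096-byte blocks with no separate metadata (ms:0).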
00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@54-57 -- next namespace: [[ -e /sys/class/nvme/nvme2/nvme2n2 ]], ns_dev=nvme2n2, nvme_get nvme2n2 id-ns /dev/nvme2n2 (/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@21-23 -- id-ns parse for nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0
-- # read -r reg val 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.678 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:11.679 13:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:11.679 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:11.680 13:22:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:11.680 13:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.680 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:11.681 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:11.943 13:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.943 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:11.944 13:22:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:11.944 13:22:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:11.944 13:22:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:11.944 13:22:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
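
At this point the outer scan in nvme/functions.sh@47-63 has finished controller nvme2 (registered above with its three namespaces and PCI address 0000:00:12.0) and moved on to /sys/class/nvme/nvme3 at 0000:00:13.0, whose id-ctrl fields (vid 0x1b36, QEMU model string, sn "12343", mdts 7, ...) are now being captured. A condensed sketch of that scan, reusing the hypothetical nvme_get_sketch helper above (loop body simplified; the real script also matches ng* character nodes via extglob and filters controllers through pci_can_use):

  # Hypothetical condensed form of the controller/namespace scan traced at
  # functions.sh@47-63; associative-array names follow the trace.
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                                # e.g. nvme3
      pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:13.0
      nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      for ns in "$ctrl/${ctrl_dev}n"*; do                 # nvme3n1, nvme3n2, ...
          [[ -e $ns ]] || continue
          nvme_get_sketch "${ns##*/}" id-ns "/dev/${ns##*/}"
      done
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                     # per-controller namespace map
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # indexed by controller number
  done
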
00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.944 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 
13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.945 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
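
Two of the nvme3 id-ctrl fields captured in this stretch are packed power-of-two exponents rather than plain counts: sqes=0x66 and cqes=0x44. Per the NVMe base specification, the low nibble of each is the required (minimum) queue-entry size and the high nibble the maximum, both as power-of-two exponents, so this QEMU controller reports fixed 64-byte submission and 16-byte completion queue entries. A quick decode:

  # Unpack the SQES/CQES values traced above (low nibble = required size,
  # high nibble = maximum, each a power-of-two exponent).
  sqes=0x66 cqes=0x44
  printf 'SQ entry size: %d..%d bytes\n' "$((2 ** (sqes & 0xf)))" "$((2 ** (sqes >> 4)))"
  printf 'CQ entry size: %d..%d bytes\n' "$((2 ** (cqes & 0xf)))" "$((2 ** (cqes >> 4)))"
  # -> SQ entry size: 64..64 bytes / CQ entry size: 16..16 bytes
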
00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.946 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:11.947 13:22:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:11.947 13:22:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:11.947 13:22:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:12.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.041 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.041 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:13.041 13:22:01 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:13.041 13:22:01 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.041 13:22:01 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.041 13:22:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:13.041 ************************************ 00:09:13.041 START TEST nvme_flexible_data_placement 00:09:13.041 ************************************ 00:09:13.041 13:22:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:13.302 Initializing NVMe Controllers 00:09:13.302 Attaching to 0000:00:13.0 00:09:13.302 Controller supports FDP Attached to 0000:00:13.0 00:09:13.302 Namespace ID: 1 Endurance Group ID: 1 00:09:13.302 Initialization complete. 
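The selection just traced comes down to one bit. Each controller's identify output was parsed above with IFS=: and read -r reg val and eval'd into a bash associative array (nvme3[...]); ctrl_has_fdp then reads that array's ctratt word and tests bit 19, the Flexible Data Placement capability bit (1 << 19 == 0x80000). That is why nvme3's CTRATT of 0x88010 matches while the 0x8000 reported by nvme0, nvme1 and nvme2 does not. A minimal bash sketch of the check, assuming only what the trace shows (not the verbatim functions.sh source):

# CTRATT bit 19 advertises NVMe Flexible Data Placement support.
ctrl_has_fdp() {
    local ctratt=$1
    (( ctratt & 1 << 19 ))    # non-zero result -> exit status 0 (true)
}

ctrl_has_fdp 0x88010 && echo "FDP supported"       # nvme3 in this run
ctrl_has_fdp 0x8000 || echo "FDP not supported"    # nvme0, nvme1, nvme2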
00:09:13.302 00:09:13.302 ================================== 00:09:13.302 == FDP tests for Namespace: #01 == 00:09:13.302 ================================== 00:09:13.302 00:09:13.302 Get Feature: FDP: 00:09:13.302 ================= 00:09:13.302 Enabled: Yes 00:09:13.302 FDP configuration Index: 0 00:09:13.302 00:09:13.302 FDP configurations log page 00:09:13.302 =========================== 00:09:13.302 Number of FDP configurations: 1 00:09:13.302 Version: 0 00:09:13.302 Size: 112 00:09:13.302 FDP Configuration Descriptor: 0 00:09:13.302 Descriptor Size: 96 00:09:13.302 Reclaim Group Identifier format: 2 00:09:13.302 FDP Volatile Write Cache: Not Present 00:09:13.302 FDP Configuration: Valid 00:09:13.302 Vendor Specific Size: 0 00:09:13.302 Number of Reclaim Groups: 2 00:09:13.302 Number of Reclaim Unit Handles: 8 00:09:13.302 Max Placement Identifiers: 128 00:09:13.302 Number of Namespaces Supported: 256 00:09:13.302 Reclaim unit Nominal Size: 6000000 bytes 00:09:13.302 Estimated Reclaim Unit Time Limit: Not Reported 00:09:13.302 RUH Desc #000: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #001: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #002: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #003: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #004: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #005: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #006: RUH Type: Initially Isolated 00:09:13.302 RUH Desc #007: RUH Type: Initially Isolated 00:09:13.302 00:09:13.302 FDP reclaim unit handle usage log page 00:09:13.302 ====================================== 00:09:13.302 Number of Reclaim Unit Handles: 8 00:09:13.302 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:13.302 RUH Usage Desc #001: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #002: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #003: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #004: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #005: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #006: RUH Attributes: Unused 00:09:13.302 RUH Usage Desc #007: RUH Attributes: Unused 00:09:13.302 00:09:13.302 FDP statistics log page 00:09:13.302 ======================= 00:09:13.302 Host bytes with metadata written: 1024503808 00:09:13.302 Media bytes with metadata written: 1024671744 00:09:13.302 Media bytes erased: 0 00:09:13.302 00:09:13.302 FDP Reclaim unit handle status 00:09:13.302 ============================== 00:09:13.302 Number of RUHS descriptors: 2 00:09:13.302 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004ef5 00:09:13.302 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:13.302 00:09:13.302 FDP write on placement id: 0 success 00:09:13.302 00:09:13.303 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:13.303 00:09:13.303 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:13.303 00:09:13.303 Get Feature: FDP Events for Placement handle: #0 00:09:13.303 ======================== 00:09:13.303 Number of FDP Events: 6 00:09:13.303 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:13.303 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:13.303 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:13.303 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:13.303 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:13.303 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:13.303 00:09:13.303 FDP events log
page 00:09:13.303 =================== 00:09:13.303 Number of FDP events: 1 00:09:13.303 FDP Event #0: 00:09:13.303 Event Type: RU Not Written to Capacity 00:09:13.303 Placement Identifier: Valid 00:09:13.303 NSID: Valid 00:09:13.303 Location: Valid 00:09:13.303 Placement Identifier: 0 00:09:13.303 Event Timestamp: 12 00:09:13.303 Namespace Identifier: 1 00:09:13.303 Reclaim Group Identifier: 0 00:09:13.303 Reclaim Unit Handle Identifier: 0 00:09:13.303 00:09:13.303 FDP test passed 00:09:13.303 00:09:13.303 real 0m0.259s 00:09:13.303 user 0m0.089s 00:09:13.303 sys 0m0.068s 00:09:13.303 13:22:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.303 ************************************ 00:09:13.303 END TEST nvme_flexible_data_placement 00:09:13.303 ************************************ 00:09:13.303 13:22:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 ************************************ 00:09:13.303 END TEST nvme_fdp 00:09:13.303 ************************************ 00:09:13.303 00:09:13.303 real 0m8.027s 00:09:13.303 user 0m1.165s 00:09:13.303 sys 0m1.525s 00:09:13.303 13:22:01 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.303 13:22:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 13:22:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:13.303 13:22:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:13.303 13:22:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.303 13:22:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.303 13:22:01 -- common/autotest_common.sh@10 -- # set +x 00:09:13.564 ************************************ 00:09:13.564 START TEST nvme_rpc 00:09:13.564 ************************************ 00:09:13.564 13:22:01 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:13.564 * Looking for test storage... 
00:09:13.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:13.564 13:22:01 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:13.564 13:22:01 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.564 13:22:01 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.564 13:22:02 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.564 --rc genhtml_branch_coverage=1 00:09:13.564 --rc genhtml_function_coverage=1 00:09:13.564 --rc genhtml_legend=1 00:09:13.564 --rc geninfo_all_blocks=1 00:09:13.564 --rc geninfo_unexecuted_blocks=1 00:09:13.564 00:09:13.564 ' 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.564 --rc genhtml_branch_coverage=1 00:09:13.564 --rc genhtml_function_coverage=1 00:09:13.564 --rc genhtml_legend=1 00:09:13.564 --rc geninfo_all_blocks=1 00:09:13.564 --rc geninfo_unexecuted_blocks=1 00:09:13.564 00:09:13.564 ' 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.564 --rc genhtml_branch_coverage=1 00:09:13.564 --rc genhtml_function_coverage=1 00:09:13.564 --rc genhtml_legend=1 00:09:13.564 --rc geninfo_all_blocks=1 00:09:13.564 --rc geninfo_unexecuted_blocks=1 00:09:13.564 00:09:13.564 ' 00:09:13.564 13:22:02 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.564 --rc genhtml_branch_coverage=1 00:09:13.565 --rc genhtml_function_coverage=1 00:09:13.565 --rc genhtml_legend=1 00:09:13.565 --rc geninfo_all_blocks=1 00:09:13.565 --rc geninfo_unexecuted_blocks=1 00:09:13.565 00:09:13.565 ' 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:13.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65691 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:13.565 13:22:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65691 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65691 ']' 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.565 13:22:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.826 [2024-11-26 13:22:02.170706] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
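Once spdk_tgt is up, the run below exercises the firmware-update error path entirely over JSON-RPC: attach the controller at 0000:00:10.0 as Nvme0, ask it to apply firmware from a file that does not exist, and require the -32603 "open file failed." error before detaching. A condensed sketch of that flow using the same rpc.py calls seen in the trace (the standalone wrapper script itself is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the PCIe controller at 0000:00:10.0 under the name Nvme0.
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

# Applying firmware from a missing file must fail with JSON-RPC -32603.
if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "unexpected success" >&2
    exit 1
fi

$rpc bdev_nvme_detach_controller Nvme0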
00:09:13.826 [2024-11-26 13:22:02.171116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65691 ] 00:09:13.826 [2024-11-26 13:22:02.335824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.087 [2024-11-26 13:22:02.460136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.087 [2024-11-26 13:22:02.460231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.659 13:22:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.659 13:22:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:14.659 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:14.921 Nvme0n1 00:09:14.921 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:14.921 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:15.182 request: 00:09:15.182 { 00:09:15.182 "bdev_name": "Nvme0n1", 00:09:15.182 "filename": "non_existing_file", 00:09:15.182 "method": "bdev_nvme_apply_firmware", 00:09:15.182 "req_id": 1 00:09:15.182 } 00:09:15.182 Got JSON-RPC error response 00:09:15.182 response: 00:09:15.182 { 00:09:15.182 "code": -32603, 00:09:15.182 "message": "open file failed." 00:09:15.182 } 00:09:15.182 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:15.182 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:15.182 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:15.444 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:15.444 13:22:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65691 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65691 ']' 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65691 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65691 00:09:15.444 killing process with pid 65691 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65691' 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65691 00:09:15.444 13:22:03 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65691 00:09:16.916 ************************************ 00:09:16.916 END TEST nvme_rpc 00:09:16.916 ************************************ 00:09:16.916 00:09:16.916 real 0m3.447s 00:09:16.916 user 0m6.464s 00:09:16.916 sys 0m0.641s 00:09:16.916 13:22:05 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.916 13:22:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.916 13:22:05 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:16.916 13:22:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:16.916 13:22:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.916 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:09:16.916 ************************************ 00:09:16.916 START TEST nvme_rpc_timeouts 00:09:16.916 ************************************ 00:09:16.916 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:16.916 * Looking for test storage... 00:09:16.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.916 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.916 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.916 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.177 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.177 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.178 13:22:05 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.178 --rc genhtml_branch_coverage=1 00:09:17.178 --rc genhtml_function_coverage=1 00:09:17.178 --rc genhtml_legend=1 00:09:17.178 --rc geninfo_all_blocks=1 00:09:17.178 --rc geninfo_unexecuted_blocks=1 00:09:17.178 00:09:17.178 ' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.178 --rc genhtml_branch_coverage=1 00:09:17.178 --rc genhtml_function_coverage=1 00:09:17.178 --rc genhtml_legend=1 00:09:17.178 --rc geninfo_all_blocks=1 00:09:17.178 --rc geninfo_unexecuted_blocks=1 00:09:17.178 00:09:17.178 ' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.178 --rc genhtml_branch_coverage=1 00:09:17.178 --rc genhtml_function_coverage=1 00:09:17.178 --rc genhtml_legend=1 00:09:17.178 --rc geninfo_all_blocks=1 00:09:17.178 --rc geninfo_unexecuted_blocks=1 00:09:17.178 00:09:17.178 ' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.178 --rc genhtml_branch_coverage=1 00:09:17.178 --rc genhtml_function_coverage=1 00:09:17.178 --rc genhtml_legend=1 00:09:17.178 --rc geninfo_all_blocks=1 00:09:17.178 --rc geninfo_unexecuted_blocks=1 00:09:17.178 00:09:17.178 ' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65756 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65756 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65788 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
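The cmp_versions trace interleaved above (and repeated at the start of each test) is how the suite decides whether the installed lcov predates 2.x: split both dotted version strings into fields and compare them numerically from the left. A simplified sketch of that comparison (not the scripts/common.sh original, which also splits on '-' and ':' via IFS=.-: as the trace shows):

# Field-by-field "version less than" check, in the spirit of cmp_versions.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field wins
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first larger field loses
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: use the 1.x option names"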
00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:17.178 13:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65788 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65788 ']' 00:09:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.178 13:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:17.178 [2024-11-26 13:22:05.616781] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:09:17.178 [2024-11-26 13:22:05.617163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65788 ] 00:09:17.439 [2024-11-26 13:22:05.776051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.439 [2024-11-26 13:22:05.861550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.439 [2024-11-26 13:22:05.861552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.010 Checking default timeout settings: 00:09:18.010 13:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.010 13:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:18.010 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:18.010 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:18.270 Making settings changes with rpc: 00:09:18.270 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:18.270 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:18.531 Check default vs. modified settings: 00:09:18.531 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:18.531 13:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65756 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65756 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:18.792 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.793 Setting action_on_timeout is changed as expected. 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65756 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65756 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.793 Setting timeout_us is changed as expected. 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65756 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65756 00:09:18.793 Setting timeout_admin_us is changed as expected. 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65756 /tmp/settings_modified_65756 00:09:18.793 13:22:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65788 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65788 ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65788 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65788 00:09:18.793 killing process with pid 65788 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65788' 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65788 00:09:18.793 13:22:07 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65788 00:09:20.179 RPC TIMEOUT SETTING TEST PASSED. 00:09:20.180 13:22:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
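Every check above follows one pattern: snapshot the target's configuration with save_config, flip the three timeout knobs, snapshot again, and assert that each field changed. A hedged recap of the sequence using the same rpc.py calls and temp files as this run (the assertion loop condenses the per-setting grep/awk/sed pipeline visible in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc save_config > /tmp/settings_default_65756
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_65756

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_65756 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_65756 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
done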
00:09:20.180 00:09:20.180 real 0m3.150s 00:09:20.180 user 0m6.121s 00:09:20.180 sys 0m0.486s 00:09:20.180 ************************************ 00:09:20.180 END TEST nvme_rpc_timeouts 00:09:20.180 ************************************ 00:09:20.180 13:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.180 13:22:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:20.180 13:22:08 -- spdk/autotest.sh@239 -- # uname -s 00:09:20.180 13:22:08 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:20.180 13:22:08 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:20.180 13:22:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.180 13:22:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.180 13:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:20.180 ************************************ 00:09:20.180 START TEST sw_hotplug 00:09:20.180 ************************************ 00:09:20.180 13:22:08 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:20.180 * Looking for test storage... 00:09:20.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.180 13:22:08 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.180 13:22:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.180 13:22:08 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.180 13:22:08 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.180 13:22:08 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.442 13:22:08 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:20.442 13:22:08 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.442 13:22:08 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.442 --rc genhtml_branch_coverage=1 00:09:20.442 --rc genhtml_function_coverage=1 00:09:20.442 --rc genhtml_legend=1 00:09:20.442 --rc geninfo_all_blocks=1 00:09:20.442 --rc geninfo_unexecuted_blocks=1 00:09:20.442 00:09:20.442 ' 00:09:20.442 13:22:08 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.442 --rc genhtml_branch_coverage=1 00:09:20.442 --rc genhtml_function_coverage=1 00:09:20.442 --rc genhtml_legend=1 00:09:20.442 --rc geninfo_all_blocks=1 00:09:20.442 --rc geninfo_unexecuted_blocks=1 00:09:20.442 00:09:20.442 ' 00:09:20.442 13:22:08 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.442 --rc genhtml_branch_coverage=1 00:09:20.442 --rc genhtml_function_coverage=1 00:09:20.442 --rc genhtml_legend=1 00:09:20.442 --rc geninfo_all_blocks=1 00:09:20.442 --rc geninfo_unexecuted_blocks=1 00:09:20.442 00:09:20.442 ' 00:09:20.442 13:22:08 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.442 --rc genhtml_branch_coverage=1 00:09:20.442 --rc genhtml_function_coverage=1 00:09:20.442 --rc genhtml_legend=1 00:09:20.442 --rc geninfo_all_blocks=1 00:09:20.442 --rc geninfo_unexecuted_blocks=1 00:09:20.442 00:09:20.442 ' 00:09:20.442 13:22:08 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:20.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.704 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:20.704 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:20.704 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:20.704 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:20.704 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:20.704 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:20.704 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
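The nvme_in_userspace trace that follows identifies NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), then runs each BDF through pci_can_use to honor the allow/deny lists (PCI_ALLOWED appears later in this run). Its core lspci pipeline, pulled out of the trace for readability:

# "0108" is the class/subclass, -p02 the NVMe prog-if; -D keeps full BDFs.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'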
00:09:20.704 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:20.704 13:22:09 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:20.704 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:20.966 13:22:09 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:20.966 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:20.966 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:20.966 13:22:09 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.228 Waiting for block devices as requested 00:09:21.490 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.490 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.490 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.751 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.043 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.043 13:22:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:27.043 13:22:15 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.043 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:27.043 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.043 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:27.616 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:27.616 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.616 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66644 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:27.877 13:22:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:27.877 13:22:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:28.139 Initializing NVMe Controllers 00:09:28.139 Attaching to 0000:00:10.0 00:09:28.139 Attaching to 0000:00:11.0 00:09:28.139 Attached to 0000:00:10.0 00:09:28.139 Attached to 0000:00:11.0 00:09:28.139 Initialization complete. Starting I/O... 
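The nvme_in_userspace trace above (scripts/common.sh@233-@245) builds the NVMe BDF list by matching PCI class code 01/08/02 (mass storage controller / non-volatile memory / NVM Express programming interface) in lspci output, then screens each BDF through pci_can_use, which consults the PCI_ALLOWED/PCI_BLOCKED lists seen later in this log. A minimal standalone sketch of that pipeline; variable names here are chosen for illustration rather than quoted from common.sh:

    #!/usr/bin/env bash
    # Enumerate NVMe controllers by PCI class code 010802, as the xtrace
    # above does. lspci -mm -n -D prints one device per line:
    #   <BDF> "<class+subclass>" "<vendor>" "<device>" [-rXX] [-pXX] ...
    class=$(printf '%02x' 1)     # 01 = mass storage controller
    subclass=$(printf '%02x' 8)  # 08 = non-volatile memory controller
    progif=$(printf '%02x' 2)    # 02 = NVM Express
    # grep keeps devices whose programming interface is -p02. cc carries
    # literal double quotes so it compares against lspci's quoted class
    # column ("0108"); tr strips any quoting from what gets printed.
    lspci -mm -n -D \
      | grep -i -- "-p${progif}" \
      | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'

Run against the VM above, this would print the four QEMU controllers (0000:00:10.0 through 0000:00:13.0) before nvme_count=2 trims the list to the first two.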
00:09:28.139 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:28.139 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:28.139 00:09:29.083 QEMU NVMe Ctrl (12340 ): 2236 I/Os completed (+2236) 00:09:29.083 QEMU NVMe Ctrl (12341 ): 2240 I/Os completed (+2240) 00:09:29.083 00:09:30.027 QEMU NVMe Ctrl (12340 ): 5032 I/Os completed (+2796) 00:09:30.027 QEMU NVMe Ctrl (12341 ): 5037 I/Os completed (+2797) 00:09:30.027 00:09:30.969 QEMU NVMe Ctrl (12340 ): 7824 I/Os completed (+2792) 00:09:30.969 QEMU NVMe Ctrl (12341 ): 7831 I/Os completed (+2794) 00:09:30.969 00:09:32.349 QEMU NVMe Ctrl (12340 ): 11393 I/Os completed (+3569) 00:09:32.349 QEMU NVMe Ctrl (12341 ): 11387 I/Os completed (+3556) 00:09:32.349 00:09:32.919 QEMU NVMe Ctrl (12340 ): 15145 I/Os completed (+3752) 00:09:32.919 QEMU NVMe Ctrl (12341 ): 15144 I/Os completed (+3757) 00:09:32.919 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:33.861 [2024-11-26 13:22:22.278732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:33.861 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:33.861 [2024-11-26 13:22:22.279710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.279755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.279770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.279785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:33.861 [2024-11-26 13:22:22.281287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.281324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.281335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.281346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:33.861 [2024-11-26 13:22:22.303626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:33.861 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:33.861 [2024-11-26 13:22:22.304484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.304514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.304531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.304544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:33.861 [2024-11-26 13:22:22.305966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.306050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.306066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 [2024-11-26 13:22:22.306077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:33.861 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:33.861 EAL: Scan for (pci) bus failed. 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:33.861 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:34.122 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:34.122 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.122 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.122 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.122 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:34.122 Attaching to 0000:00:10.0 00:09:34.122 Attached to 0000:00:10.0 00:09:34.122 QEMU NVMe Ctrl (12340 ): 32 I/Os completed (+32) 00:09:34.122 00:09:34.123 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:34.123 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.123 13:22:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:34.123 Attaching to 0000:00:11.0 00:09:34.123 Attached to 0000:00:11.0 00:09:35.065 QEMU NVMe Ctrl (12340 ): 3852 I/Os completed (+3820) 00:09:35.065 QEMU NVMe Ctrl (12341 ): 3459 I/Os completed (+3459) 00:09:35.065 00:09:36.009 QEMU NVMe Ctrl (12340 ): 7625 I/Os completed (+3773) 00:09:36.009 QEMU NVMe Ctrl (12341 ): 7303 I/Os completed (+3844) 00:09:36.009 00:09:36.952 QEMU NVMe Ctrl (12340 ): 10429 I/Os completed (+2804) 00:09:36.952 QEMU NVMe Ctrl (12341 ): 10107 I/Os completed (+2804) 00:09:36.952 00:09:38.338 QEMU NVMe Ctrl (12340 ): 14169 I/Os completed (+3740) 00:09:38.338 QEMU NVMe Ctrl (12341 ): 13837 I/Os completed (+3730) 00:09:38.338 00:09:38.910 QEMU NVMe Ctrl (12340 ): 17926 I/Os completed (+3757) 00:09:38.910 QEMU NVMe Ctrl (12341 ): 17591 I/Os completed (+3754) 00:09:38.910 00:09:40.296 QEMU NVMe Ctrl (12340 ): 21783 I/Os completed (+3857) 00:09:40.296 QEMU NVMe Ctrl (12341 ): 21434 I/Os completed (+3843) 00:09:40.296 00:09:41.239 QEMU NVMe Ctrl (12340 ): 25560 I/Os completed (+3777) 
00:09:41.239 QEMU NVMe Ctrl (12341 ): 25212 I/Os completed (+3778) 00:09:41.239 00:09:42.182 QEMU NVMe Ctrl (12340 ): 29525 I/Os completed (+3965) 00:09:42.182 QEMU NVMe Ctrl (12341 ): 29176 I/Os completed (+3964) 00:09:42.182 00:09:43.125 QEMU NVMe Ctrl (12340 ): 33296 I/Os completed (+3771) 00:09:43.125 QEMU NVMe Ctrl (12341 ): 32943 I/Os completed (+3767) 00:09:43.125 00:09:44.068 QEMU NVMe Ctrl (12340 ): 37215 I/Os completed (+3919) 00:09:44.068 QEMU NVMe Ctrl (12341 ): 36867 I/Os completed (+3924) 00:09:44.068 00:09:45.011 QEMU NVMe Ctrl (12340 ): 40945 I/Os completed (+3730) 00:09:45.011 QEMU NVMe Ctrl (12341 ): 40588 I/Os completed (+3721) 00:09:45.011 00:09:45.955 QEMU NVMe Ctrl (12340 ): 45025 I/Os completed (+4080) 00:09:45.955 QEMU NVMe Ctrl (12341 ): 44668 I/Os completed (+4080) 00:09:45.955 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.217 [2024-11-26 13:22:34.560576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:46.217 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:46.217 [2024-11-26 13:22:34.561596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.561707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.561737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.561796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.217 [2024-11-26 13:22:34.563397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.563495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.563523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.563577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.217 [2024-11-26 13:22:34.581040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:46.217 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:46.217 [2024-11-26 13:22:34.581989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.582074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.582104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.582150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.217 [2024-11-26 13:22:34.583545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.583595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.583619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 [2024-11-26 13:22:34.583688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.217 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:46.217 Attaching to 0000:00:10.0 00:09:46.217 Attached to 0000:00:10.0 00:09:46.478 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:46.478 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.478 13:22:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:46.478 Attaching to 0000:00:11.0 00:09:46.478 Attached to 0000:00:11.0 00:09:47.050 QEMU NVMe Ctrl (12340 ): 2150 I/Os completed (+2150) 00:09:47.050 QEMU NVMe Ctrl (12341 ): 1869 I/Os completed (+1869) 00:09:47.050 00:09:47.993 QEMU NVMe Ctrl (12340 ): 4834 I/Os completed (+2684) 00:09:47.993 QEMU NVMe Ctrl (12341 ): 4557 I/Os completed (+2688) 00:09:47.993 00:09:48.935 QEMU NVMe Ctrl (12340 ): 7502 I/Os completed (+2668) 00:09:48.935 QEMU NVMe Ctrl (12341 ): 7227 I/Os completed (+2670) 00:09:48.935 00:09:50.349 QEMU NVMe Ctrl (12340 ): 10781 I/Os completed (+3279) 00:09:50.349 QEMU NVMe Ctrl (12341 ): 10506 I/Os completed (+3279) 00:09:50.349 00:09:50.919 QEMU NVMe Ctrl (12340 ): 14496 I/Os completed (+3715) 00:09:50.919 QEMU NVMe Ctrl (12341 ): 14227 I/Os completed (+3721) 00:09:50.919 00:09:52.304 QEMU NVMe Ctrl (12340 ): 17885 I/Os completed (+3389) 00:09:52.304 QEMU NVMe Ctrl (12341 ): 17716 I/Os completed (+3489) 00:09:52.304 00:09:53.247 QEMU NVMe Ctrl (12340 ): 20608 I/Os completed (+2723) 00:09:53.248 QEMU NVMe Ctrl (12341 ): 20474 I/Os completed (+2758) 00:09:53.248 00:09:54.191 QEMU NVMe Ctrl (12340 ): 23244 I/Os completed (+2636) 00:09:54.191 QEMU NVMe Ctrl (12341 ): 23111 I/Os completed (+2637) 00:09:54.191 
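The hot-remove half of each iteration above is the echo at sw_hotplug.sh@40, issued once per controller while I/O is still in flight (hence the "aborting outstanding command" dumps), and the re-attach half is the echo sequence at @56-@62 seen between iterations. A hedged sketch of that sysfs cycle for a single device; the exact node names are inferred from the trace, not quoted from sw_hotplug.sh:

    bdf=0000:00:10.0
    # Hot-remove: detach the device from its driver and delete it from
    # the PCI tree; outstanding commands get aborted by the driver.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    # Re-attach: rescan the bus so the device reappears, then steer it
    # back to the userspace driver the test runs under.
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    # Clear the override so later setup.sh runs can rebind freely.
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"

Requires root; on this run the "Attaching to .../Attached to ..." lines that follow each cycle confirm the hotplug example app re-enumerated both controllers.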
00:09:55.136 QEMU NVMe Ctrl (12340 ): 25896 I/Os completed (+2652) 00:09:55.136 QEMU NVMe Ctrl (12341 ): 25763 I/Os completed (+2652) 00:09:55.136 00:09:56.079 QEMU NVMe Ctrl (12340 ): 29086 I/Os completed (+3190) 00:09:56.079 QEMU NVMe Ctrl (12341 ): 28962 I/Os completed (+3199) 00:09:56.079 00:09:57.022 QEMU NVMe Ctrl (12340 ): 32839 I/Os completed (+3753) 00:09:57.022 QEMU NVMe Ctrl (12341 ): 32698 I/Os completed (+3736) 00:09:57.022 00:09:57.966 QEMU NVMe Ctrl (12340 ): 36561 I/Os completed (+3722) 00:09:57.966 QEMU NVMe Ctrl (12341 ): 36432 I/Os completed (+3734) 00:09:57.966 00:09:58.538 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:58.538 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:58.538 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.538 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.538 [2024-11-26 13:22:46.815207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:58.538 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:58.538 [2024-11-26 13:22:46.816250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.538 [2024-11-26 13:22:46.816364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.816396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.816485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:58.539 [2024-11-26 13:22:46.818117] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.818176] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.818189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.818201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.539 [2024-11-26 13:22:46.838023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:58.539 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:58.539 [2024-11-26 13:22:46.838988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.839083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.839101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.839114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:58.539 [2024-11-26 13:22:46.840499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.840531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.840545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 [2024-11-26 13:22:46.840555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:58.539 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:58.539 EAL: Scan for (pci) bus failed. 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:58.539 13:22:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:58.539 Attaching to 0000:00:10.0 00:09:58.539 Attached to 0000:00:10.0 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:58.539 13:22:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:58.539 Attaching to 0000:00:11.0 00:09:58.539 Attached to 0000:00:11.0 00:09:58.539 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:58.539 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:58.539 [2024-11-26 13:22:47.088015] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:10.776 13:22:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:10.776 13:22:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:10.776 13:22:59 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:10:10.776 13:22:59 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:10:10.776 13:22:59 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:10.776 13:22:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:10:10.776 13:22:59 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:10:10.776 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 13:22:59 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66644 00:10:17.364 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66644) - No such process 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66644 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:17.364 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67186 00:10:17.365 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:17.365 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67186 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67186 ']' 00:10:17.365 13:23:05 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.365 13:23:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.365 [2024-11-26 13:23:05.192221] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
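From here tgt_run_hotplug repeats the same three hotplug events against a running spdk_tgt instead of the standalone hotplug example (use_bdev=true): bdev_nvme_set_hotplug -e turns on the target's PCIe hotplug monitor, and the bdev_bdfs helper polls which controllers the target still exposes, sleeping 0.5s while any "Still waiting for %s to be gone" remain. A sketch of that RPC flow using the in-repo rpc.py client (paths relative to an SPDK checkout; flags as commonly documented for these RPCs):

    # Enable the hotplug monitor so removed/re-inserted controllers are
    # detached and re-attached automatically (poll period left default).
    scripts/rpc.py bdev_nvme_set_hotplug -e
    # bdev_bdfs equivalent: list the PCI addresses backing current bdevs.
    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    # Disable the monitor when done.
    scripts/rpc.py bdev_nvme_set_hotplug -d

The jq filter is the one the trace runs at sw_hotplug.sh@12; the @71 comparison below checks that the sorted address list comes back as exactly "0000:00:10.0 0000:00:11.0" once both controllers re-attach.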
00:10:17.365 [2024-11-26 13:23:05.192378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67186 ] 00:10:17.365 [2024-11-26 13:23:05.351079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.365 [2024-11-26 13:23:05.475963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.624 13:23:06 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:17.625 13:23:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:17.625 13:23:06 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.216 13:23:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.216 13:23:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.216 13:23:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:24.216 [2024-11-26 13:23:12.261933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:24.216 [2024-11-26 13:23:12.263163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.216 [2024-11-26 13:23:12.263198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.216 [2024-11-26 13:23:12.263211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.216 [2024-11-26 13:23:12.263228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.216 [2024-11-26 13:23:12.263236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.216 [2024-11-26 13:23:12.263244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.216 [2024-11-26 13:23:12.263251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.216 [2024-11-26 13:23:12.263259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.216 [2024-11-26 13:23:12.263265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.216 [2024-11-26 13:23:12.263276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.216 [2024-11-26 13:23:12.263283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.216 [2024-11-26 13:23:12.263291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.216 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.216 13:23:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.216 13:23:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.216 [2024-11-26 13:23:12.761923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:24.216 [2024-11-26 13:23:12.763068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.216 [2024-11-26 13:23:12.763097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.216 [2024-11-26 13:23:12.763107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.216 [2024-11-26 13:23:12.763119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.217 [2024-11-26 13:23:12.763127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.217 [2024-11-26 13:23:12.763134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.217 [2024-11-26 13:23:12.763142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.217 [2024-11-26 13:23:12.763149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.217 [2024-11-26 13:23:12.763157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.217 [2024-11-26 13:23:12.763164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.217 [2024-11-26 13:23:12.763172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.217 [2024-11-26 13:23:12.763178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.217 13:23:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.479 13:23:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.479 13:23:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.479 13:23:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.479 13:23:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.719 13:23:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:36.719 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:36.719 [2024-11-26 13:23:25.162160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:36.719 [2024-11-26 13:23:25.163400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.719 [2024-11-26 13:23:25.163521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.719 [2024-11-26 13:23:25.164159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.719 [2024-11-26 13:23:25.164434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.719 [2024-11-26 13:23:25.164719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.719 [2024-11-26 13:23:25.164948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.719 [2024-11-26 13:23:25.165221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.719 [2024-11-26 13:23:25.165389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.719 [2024-11-26 13:23:25.165658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.719 [2024-11-26 13:23:25.165852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.719 [2024-11-26 13:23:25.165996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.719 [2024-11-26 13:23:25.166185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.292 13:23:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.292 13:23:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.292 [2024-11-26 13:23:25.662171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:37.292 [2024-11-26 13:23:25.667283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.292 [2024-11-26 13:23:25.667400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.292 [2024-11-26 13:23:25.667489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.292 [2024-11-26 13:23:25.667528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.292 [2024-11-26 13:23:25.667551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.292 [2024-11-26 13:23:25.667581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.292 [2024-11-26 13:23:25.667614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.292 [2024-11-26 13:23:25.667635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.292 [2024-11-26 13:23:25.667701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.292 [2024-11-26 13:23:25.667733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.292 [2024-11-26 13:23:25.667756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.292 [2024-11-26 13:23:25.667860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.292 13:23:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:37.292 13:23:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.865 13:23:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.865 13:23:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.865 13:23:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.865 13:23:26 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.865 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:38.126 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.126 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.126 13:23:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.368 13:23:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.368 13:23:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.368 [2024-11-26 13:23:38.562411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:50.368 [2024-11-26 13:23:38.563585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.368 [2024-11-26 13:23:38.563619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.368 [2024-11-26 13:23:38.563630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.368 [2024-11-26 13:23:38.563646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.368 [2024-11-26 13:23:38.563653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.368 [2024-11-26 13:23:38.563663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.368 [2024-11-26 13:23:38.563670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.368 [2024-11-26 13:23:38.563678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.368 [2024-11-26 13:23:38.563684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.368 [2024-11-26 13:23:38.563692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.368 [2024-11-26 13:23:38.563698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.368 [2024-11-26 13:23:38.563706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.630 13:23:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.630 13:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.630 13:23:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:50.630 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.630 [2024-11-26 13:23:39.162700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:50.630 [2024-11-26 13:23:39.163824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.630 [2024-11-26 13:23:39.163854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.630 [2024-11-26 13:23:39.163866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.630 [2024-11-26 13:23:39.163877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.630 [2024-11-26 13:23:39.163886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.630 [2024-11-26 13:23:39.163893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.630 [2024-11-26 13:23:39.163902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.630 [2024-11-26 13:23:39.163908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.630 [2024-11-26 13:23:39.163917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.630 [2024-11-26 13:23:39.163924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.630 [2024-11-26 13:23:39.163932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.630 [2024-11-26 13:23:39.163938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.211 13:23:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.211 13:23:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.211 13:23:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.211 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.473 13:23:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:11:03.712 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:03.712 13:23:51 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:03.712 13:23:51 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:03.712 13:23:51 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.301 13:23:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.301 13:23:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.301 13:23:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.301 13:23:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.301 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.301 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.301 [2024-11-26 13:23:58.034385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.301 [2024-11-26 13:23:58.035267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.301 [2024-11-26 13:23:58.035302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.301 [2024-11-26 13:23:58.035312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.301 [2024-11-26 13:23:58.035329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.301 [2024-11-26 13:23:58.035338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.301 [2024-11-26 13:23:58.035346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.301 [2024-11-26 13:23:58.035353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.301 [2024-11-26 13:23:58.035362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.301 [2024-11-26 13:23:58.035369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.301 [2024-11-26 13:23:58.035377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.301 [2024-11-26 13:23:58.035383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.301 [2024-11-26 13:23:58.035393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.301 [2024-11-26 13:23:58.434383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:10.301 [2024-11-26 13:23:58.435228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.302 [2024-11-26 13:23:58.435257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.302 [2024-11-26 13:23:58.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.302 [2024-11-26 13:23:58.435279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.302 [2024-11-26 13:23:58.435288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.302 [2024-11-26 13:23:58.435295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.302 [2024-11-26 13:23:58.435303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.302 [2024-11-26 13:23:58.435309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.302 [2024-11-26 13:23:58.435317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.302 [2024-11-26 13:23:58.435324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.302 [2024-11-26 13:23:58.435331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.302 [2024-11-26 13:23:58.435337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.302 13:23:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.302 13:23:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.302 13:23:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.302 13:23:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.541 [2024-11-26 13:24:10.834610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:22.541 [2024-11-26 13:24:10.835955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.541 [2024-11-26 13:24:10.835992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.541 [2024-11-26 13:24:10.836002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.541 [2024-11-26 13:24:10.836018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.541 [2024-11-26 13:24:10.836026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.541 [2024-11-26 13:24:10.836034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.541 [2024-11-26 13:24:10.836042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.541 [2024-11-26 13:24:10.836050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.541 [2024-11-26 13:24:10.836056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.541 [2024-11-26 13:24:10.836064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.541 [2024-11-26 13:24:10.836070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.541 [2024-11-26 13:24:10.836078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.541 13:24:10 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.541 13:24:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:22.541 13:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.115 13:24:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.115 13:24:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.115 13:24:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.115 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.115 [2024-11-26 13:24:11.534610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
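Each "-- # echo 1" under sw_hotplug.sh@40 is the hot-remove trigger itself; xtrace does not record redirections, so the write target is invisible in this log. Assuming the conventional sysfs hot-remove attribute — an assumption, not confirmed by the trace — the step looks like:

# Hypothetical reconstruction; xtrace hides the redirection target.
# Writing 1 to a device's sysfs 'remove' attribute simulates surprise removal,
# which is what produces the 'in failed state' controller logs above.
for dev in "${nvmes[@]}"; do
  echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target path
done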
00:11:23.115 [2024-11-26 13:24:11.535518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.115 [2024-11-26 13:24:11.535544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.115 [2024-11-26 13:24:11.535557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.115 [2024-11-26 13:24:11.535571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.115 [2024-11-26 13:24:11.535582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.115 [2024-11-26 13:24:11.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.115 [2024-11-26 13:24:11.535597] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.115 [2024-11-26 13:24:11.535604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.116 [2024-11-26 13:24:11.535612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.116 [2024-11-26 13:24:11.535619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.116 [2024-11-26 13:24:11.535627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.116 [2024-11-26 13:24:11.535633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.377 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.377 13:24:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.377 13:24:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.638 13:24:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.638 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.638 13:24:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:23.638 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.900 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.900 13:24:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.136 13:24:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.136 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.136 [2024-11-26 13:24:24.334860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
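After removal the helper reattaches the controllers: sw_hotplug.sh@56 echoes 1 (a PCI rescan, by convention), @58-62 write the driver name and each BDF back per device, @66 sleeps 12 s, and @70-71 compare the rediscovered BDFs against the expected pair. The redirection targets are again hidden by xtrace, and the trace's two bdf echoes plus the trailing empty echo are condensed here — the sysfs paths below are assumptions illustrating the pattern, not the verbatim script:

echo 1 > /sys/bus/pci/rescan                      # @56 -- assumed target
for dev in "${nvmes[@]}"; do                      # @58
  echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59, assumed
  echo "$dev" > /sys/bus/pci/drivers_probe        # @60-61, assumed
done
sleep 12                                          # @66: give hotplug time to settle
bdfs=($(bdev_bdfs))                               # @70
[[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]   # @71: both controllers are back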
00:11:36.136 [2024-11-26 13:24:24.337279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.136 [2024-11-26 13:24:24.337319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.136 [2024-11-26 13:24:24.337330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.136 [2024-11-26 13:24:24.337348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.136 [2024-11-26 13:24:24.337355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.136 [2024-11-26 13:24:24.337364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.136 [2024-11-26 13:24:24.337371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.136 [2024-11-26 13:24:24.337381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.136 [2024-11-26 13:24:24.337388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.136 [2024-11-26 13:24:24.337396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.136 [2024-11-26 13:24:24.337402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.136 [2024-11-26 13:24:24.337410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.398 13:24:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.398 13:24:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.398 13:24:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.398 [2024-11-26 13:24:24.834859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:36.398 [2024-11-26 13:24:24.835728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.398 [2024-11-26 13:24:24.835757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.398 [2024-11-26 13:24:24.835768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.398 [2024-11-26 13:24:24.835780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.398 [2024-11-26 13:24:24.835789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.398 [2024-11-26 13:24:24.835796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.398 [2024-11-26 13:24:24.835804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.398 [2024-11-26 13:24:24.835811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.398 [2024-11-26 13:24:24.835819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.398 [2024-11-26 13:24:24.835826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.398 [2024-11-26 13:24:24.835836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.398 [2024-11-26 13:24:24.835842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.398 13:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.973 13:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.973 13:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.973 13:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.973 13:24:25 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.973 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:37.234 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.234 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.234 13:24:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:11:49.468 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:49.468 13:24:37 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67186 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67186 ']' 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67186 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67186 00:11:49.468 killing process with pid 67186 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67186' 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67186 00:11:49.468 13:24:37 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67186 00:11:50.413 13:24:38 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:50.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:51.245 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.245 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.245 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.245 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 
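The teardown at 13:24:37 runs killprocess 67186 (common/autotest_common.sh@953-978). Reassembled from the traced checks — empty-pid guard, kill -0 liveness probe, uname, ps comm lookup yielding reactor_0, the sudo comparison, then kill and wait — its shape is roughly the following; this is an approximation of the traced logic, not the verbatim helper:

killprocess() {
  local pid=$1 process_name
  [[ -n $pid ]] || return 1                         # @954: refuse an empty pid
  kill -0 "$pid" 2> /dev/null || return 0           # @958: nothing to do if gone (assumed fallback)
  if [[ $(uname) == Linux ]]; then                  # @959
    process_name=$(ps --no-headers -o comm= "$pid") # @960: 'reactor_0' in this run
  fi
  [[ $process_name != sudo ]] || return 1           # @964: never signal a sudo wrapper (simplified)
  echo "killing process with pid $pid"              # @972
  kill "$pid"                                       # @973
  wait "$pid" || true                               # @978: reap it (pid is a child of the harness)
}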
00:11:51.245 00:11:51.245 real 2m31.194s 00:11:51.245 user 1m52.695s 00:11:51.245 sys 0m17.026s 00:11:51.245 ************************************ 00:11:51.245 END TEST sw_hotplug 00:11:51.245 ************************************ 00:11:51.245 13:24:39 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.245 13:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.508 13:24:39 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:51.508 13:24:39 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.508 13:24:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.508 13:24:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.508 13:24:39 -- common/autotest_common.sh@10 -- # set +x 00:11:51.508 ************************************ 00:11:51.508 START TEST nvme_xnvme 00:11:51.508 ************************************ 00:11:51.508 13:24:39 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.508 * Looking for test storage... 00:11:51.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.508 13:24:39 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.508 13:24:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.508 13:24:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.508 13:24:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.508 13:24:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.508 13:24:40 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.508 13:24:40 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.508 --rc genhtml_branch_coverage=1 00:11:51.508 --rc genhtml_function_coverage=1 00:11:51.508 --rc genhtml_legend=1 00:11:51.508 --rc geninfo_all_blocks=1 00:11:51.508 --rc geninfo_unexecuted_blocks=1 00:11:51.508 00:11:51.508 ' 00:11:51.508 13:24:40 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.508 --rc genhtml_branch_coverage=1 00:11:51.508 --rc genhtml_function_coverage=1 00:11:51.508 --rc genhtml_legend=1 00:11:51.508 --rc geninfo_all_blocks=1 00:11:51.508 --rc geninfo_unexecuted_blocks=1 00:11:51.508 00:11:51.508 ' 00:11:51.508 13:24:40 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.508 --rc genhtml_branch_coverage=1 00:11:51.508 --rc genhtml_function_coverage=1 00:11:51.509 --rc genhtml_legend=1 00:11:51.509 --rc geninfo_all_blocks=1 00:11:51.509 --rc geninfo_unexecuted_blocks=1 00:11:51.509 00:11:51.509 ' 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.509 --rc genhtml_branch_coverage=1 00:11:51.509 --rc genhtml_function_coverage=1 00:11:51.509 --rc genhtml_legend=1 00:11:51.509 --rc geninfo_all_blocks=1 00:11:51.509 --rc geninfo_unexecuted_blocks=1 00:11:51.509 00:11:51.509 ' 00:11:51.509 13:24:40 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:51.509 13:24:40 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:51.509 13:24:40 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:51.509 13:24:40 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:51.509 13:24:40 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:51.509 13:24:40 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:51.510 13:24:40 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:51.510 13:24:40 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:51.510 13:24:40 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:51.510 #define SPDK_CONFIG_H 00:11:51.510 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:51.510 #define SPDK_CONFIG_APPS 1 00:11:51.510 #define SPDK_CONFIG_ARCH native 00:11:51.510 #define SPDK_CONFIG_ASAN 1 00:11:51.510 #undef SPDK_CONFIG_AVAHI 00:11:51.510 #undef SPDK_CONFIG_CET 00:11:51.510 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:51.510 #define SPDK_CONFIG_COVERAGE 1 00:11:51.510 #define SPDK_CONFIG_CROSS_PREFIX 00:11:51.510 #undef SPDK_CONFIG_CRYPTO 00:11:51.510 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:51.510 #undef SPDK_CONFIG_CUSTOMOCF 00:11:51.510 #undef SPDK_CONFIG_DAOS 00:11:51.510 #define SPDK_CONFIG_DAOS_DIR 00:11:51.510 #define SPDK_CONFIG_DEBUG 1 00:11:51.510 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:51.510 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.510 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:51.510 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:51.510 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:51.510 #undef SPDK_CONFIG_DPDK_UADK 00:11:51.510 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.510 #define SPDK_CONFIG_EXAMPLES 1 00:11:51.510 #undef SPDK_CONFIG_FC 00:11:51.510 #define SPDK_CONFIG_FC_PATH 00:11:51.510 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:51.510 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:51.510 #define SPDK_CONFIG_FSDEV 1 00:11:51.510 #undef SPDK_CONFIG_FUSE 00:11:51.510 #undef SPDK_CONFIG_FUZZER 00:11:51.510 #define SPDK_CONFIG_FUZZER_LIB 00:11:51.510 #undef SPDK_CONFIG_GOLANG 00:11:51.510 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:51.510 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:51.510 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:51.510 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:51.510 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:51.510 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:51.510 #undef SPDK_CONFIG_HAVE_LZ4 00:11:51.510 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:51.510 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:51.510 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:51.510 #define SPDK_CONFIG_IDXD 1 00:11:51.510 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:51.510 #undef SPDK_CONFIG_IPSEC_MB 00:11:51.510 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:51.510 #define SPDK_CONFIG_ISAL 1 00:11:51.510 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:51.510 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:51.510 #define SPDK_CONFIG_LIBDIR 00:11:51.510 #undef SPDK_CONFIG_LTO 00:11:51.510 #define SPDK_CONFIG_MAX_LCORES 128 00:11:51.510 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:51.510 #define SPDK_CONFIG_NVME_CUSE 1 00:11:51.510 #undef SPDK_CONFIG_OCF 00:11:51.510 #define SPDK_CONFIG_OCF_PATH 00:11:51.510 #define SPDK_CONFIG_OPENSSL_PATH 00:11:51.510 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:51.510 #define SPDK_CONFIG_PGO_DIR 00:11:51.510 #undef SPDK_CONFIG_PGO_USE 00:11:51.510 #define SPDK_CONFIG_PREFIX /usr/local 00:11:51.510 #undef SPDK_CONFIG_RAID5F 00:11:51.510 #undef SPDK_CONFIG_RBD 00:11:51.510 #define SPDK_CONFIG_RDMA 1 00:11:51.510 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:51.510 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:51.510 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:51.510 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:51.510 #define SPDK_CONFIG_SHARED 1 00:11:51.510 #undef SPDK_CONFIG_SMA 00:11:51.510 #define SPDK_CONFIG_TESTS 1 00:11:51.510 #undef SPDK_CONFIG_TSAN 00:11:51.510 #define SPDK_CONFIG_UBLK 1 00:11:51.510 #define SPDK_CONFIG_UBSAN 1 00:11:51.510 #undef SPDK_CONFIG_UNIT_TESTS 00:11:51.510 #undef SPDK_CONFIG_URING 00:11:51.510 #define SPDK_CONFIG_URING_PATH 00:11:51.510 #undef SPDK_CONFIG_URING_ZNS 00:11:51.510 #undef SPDK_CONFIG_USDT 00:11:51.510 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:51.510 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:51.510 #undef SPDK_CONFIG_VFIO_USER 00:11:51.510 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:51.510 #define SPDK_CONFIG_VHOST 1 00:11:51.510 #define SPDK_CONFIG_VIRTIO 1 00:11:51.510 #undef SPDK_CONFIG_VTUNE 00:11:51.510 #define SPDK_CONFIG_VTUNE_DIR 00:11:51.510 #define SPDK_CONFIG_WERROR 1 00:11:51.510 #define SPDK_CONFIG_WPDK_DIR 00:11:51.510 #define SPDK_CONFIG_XNVME 1 00:11:51.510 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:51.510 13:24:40 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:51.510 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.510 13:24:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.510 13:24:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.510 13:24:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.510 13:24:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.510 13:24:40 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.510 13:24:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.510 13:24:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.510 13:24:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.510 13:24:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.510 13:24:40 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:51.510 13:24:40 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:51.511 
13:24:40 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:51.511 13:24:40 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:51.511 13:24:40 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:51.512 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.512 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:51.512 13:24:40 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
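[The long run of `-- # : 0` / `-- # export SPDK_TEST_...` pairs above is bash's default-setting idiom (`: "${VAR=default}"` followed by an export), and the block ends by pinning the sanitizer runtime options. A minimal sketch of that setup, reconstructed from the trace — the option strings and the libfuse3 suppression are verbatim from the log, the surrounding shell is assumed:

    # Default a test flag without clobbering a caller-provided value, then
    # export it -- this is what each ": N / export NAME" pair in the trace is.
    : "${SPDK_TEST_NVME_FDP=1}"; export SPDK_TEST_NVME_FDP

    # Sanitizer runtime settings used for the whole suite.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # A known leak in libfuse3 is suppressed for LeakSanitizer.
    asan_suppression_file=/var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
]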
00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68572 ]] 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68572 00:11:51.775 13:24:40 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.z2KSyv 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.z2KSyv/tests/xnvme /tmp/spdk.z2KSyv 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:51.776 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13956485120 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5611532288 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260621312 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13956485120 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5611532288 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98841915392 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=860864512 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:51.776 * Looking for test storage... 
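[The df table above feeds the storage search announced by the printf: walk the candidate directories and keep the first one whose filesystem can hold the requested ~2 GiB plus overhead (hence requested_size=2214592512). A simplified, hypothetical restatement of that selection logic — pick_test_storage is not the real helper name, and GNU df flags are assumed:

    # Return the first candidate dir with enough free space and export it.
    pick_test_storage() {
        local requested_size=$1 dir avail; shift
        for dir in "$@"; do
            avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -1)
            [[ -n $avail ]] && (( avail >= requested_size )) || continue
            export SPDK_TEST_STORAGE=$dir
            printf '* Found test storage at %s\n' "$dir"
            return 0
        done
        return 1
    }

    pick_test_storage 2214592512 \
        /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.z2KSyv
]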
00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13956485120 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:51.776 13:24:40 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.777 --rc genhtml_branch_coverage=1 00:11:51.777 --rc genhtml_function_coverage=1 00:11:51.777 --rc genhtml_legend=1 00:11:51.777 --rc geninfo_all_blocks=1 00:11:51.777 --rc geninfo_unexecuted_blocks=1 00:11:51.777 00:11:51.777 ' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.777 --rc genhtml_branch_coverage=1 00:11:51.777 --rc genhtml_function_coverage=1 00:11:51.777 --rc genhtml_legend=1 00:11:51.777 --rc geninfo_all_blocks=1 
00:11:51.777 --rc geninfo_unexecuted_blocks=1 00:11:51.777 00:11:51.777 ' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.777 --rc genhtml_branch_coverage=1 00:11:51.777 --rc genhtml_function_coverage=1 00:11:51.777 --rc genhtml_legend=1 00:11:51.777 --rc geninfo_all_blocks=1 00:11:51.777 --rc geninfo_unexecuted_blocks=1 00:11:51.777 00:11:51.777 ' 00:11:51.777 13:24:40 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.777 --rc genhtml_branch_coverage=1 00:11:51.777 --rc genhtml_function_coverage=1 00:11:51.777 --rc genhtml_legend=1 00:11:51.777 --rc geninfo_all_blocks=1 00:11:51.777 --rc geninfo_unexecuted_blocks=1 00:11:51.777 00:11:51.777 ' 00:11:51.777 13:24:40 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.777 13:24:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.777 13:24:40 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.777 13:24:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.777 13:24:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.777 13:24:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.777 13:24:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.777 13:24:40 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:51.777 13:24:40 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:51.778 13:24:40 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:52.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.303 Waiting for block devices as requested 00:11:52.303 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.303 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.303 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.564 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:57.858 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:57.858 13:24:46 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:57.858 13:24:46 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:57.858 13:24:46 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:58.121 13:24:46 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:58.121 13:24:46 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:58.121 13:24:46 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:58.121 13:24:46 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:58.121 13:24:46 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:58.121 No valid GPT data, bailing 00:11:58.121 13:24:46 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:58.383 13:24:46 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:58.383 13:24:46 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:58.383 13:24:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:58.383 13:24:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.383 13:24:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.383 13:24:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.383 ************************************ 00:11:58.383 START TEST xnvme_rpc 00:11:58.383 ************************************ 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68965 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68965 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68965 ']' 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.383 13:24:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.383 [2024-11-26 13:24:46.809681] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
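[prep_nvme above reloads the kernel NVMe driver with polled queues and then binds each xnvme io_mechanism to the node it will exercise: libaio and io_uring both drive the block device, while io_uring_cmd goes through the generic char device for passthrough. Restated compactly — the values are taken from the trace, but this is a sketch, not the common.sh source:

    # Reload the driver with kernel polling enabled for the I/O tests.
    modprobe -r nvme
    modprobe nvme poll_queues=10

    # One target node per xnvme I/O mechanism.
    declare -A xnvme_filename=(
        [libaio]=/dev/nvme0n1        # Linux-native AIO against the block node
        [io_uring]=/dev/nvme0n1      # io_uring against the same block node
        [io_uring_cmd]=/dev/ng0n1    # uring passthrough via the generic node
    )
]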
00:11:58.383 [2024-11-26 13:24:46.809825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68965 ] 00:11:58.645 [2024-11-26 13:24:46.975325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.645 [2024-11-26 13:24:47.093875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.220 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.220 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.220 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:59.220 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.220 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 xnvme_bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68965 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68965 ']' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68965 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68965 00:11:59.481 killing process with pid 68965 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68965' 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68965 00:11:59.481 13:24:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68965 00:12:01.395 00:12:01.395 real 0m2.874s 00:12:01.395 user 0m2.840s 00:12:01.395 sys 0m0.482s 00:12:01.395 13:24:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.396 ************************************ 00:12:01.396 END TEST xnvme_rpc 00:12:01.396 ************************************ 00:12:01.396 13:24:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.396 13:24:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:01.396 13:24:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.396 13:24:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.396 13:24:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.396 ************************************ 00:12:01.396 START TEST xnvme_bdevperf 00:12:01.396 ************************************ 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
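[Each xnvme_rpc assertion above round-trips the bdev through JSON-RPC: create it, dump the running config with framework_get_config, and pull individual params back out with jq. The same check written out as a standalone sketch — the rpc.py path is assumed from the repo layout, the jq filters are verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    "$rpc" framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # expected output: libaio

    "$rpc" bdev_xnvme_delete xnvme_bdev
]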
00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:01.396 13:24:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:01.396 { 00:12:01.396 "subsystems": [ 00:12:01.396 { 00:12:01.396 "subsystem": "bdev", 00:12:01.396 "config": [ 00:12:01.396 { 00:12:01.396 "params": { 00:12:01.396 "io_mechanism": "libaio", 00:12:01.396 "conserve_cpu": false, 00:12:01.396 "filename": "/dev/nvme0n1", 00:12:01.396 "name": "xnvme_bdev" 00:12:01.396 }, 00:12:01.396 "method": "bdev_xnvme_create" 00:12:01.396 }, 00:12:01.396 { 00:12:01.396 "method": "bdev_wait_for_examine" 00:12:01.396 } 00:12:01.396 ] 00:12:01.396 } 00:12:01.396 ] 00:12:01.396 } 00:12:01.396 [2024-11-26 13:24:49.735681] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:12:01.396 [2024-11-26 13:24:49.735825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69039 ] 00:12:01.396 [2024-11-26 13:24:49.899780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.655 [2024-11-26 13:24:50.017011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.916 Running I/O for 5 seconds... 00:12:03.803 26993.00 IOPS, 105.44 MiB/s [2024-11-26T13:24:53.761Z] 26894.50 IOPS, 105.06 MiB/s [2024-11-26T13:24:54.333Z] 26653.00 IOPS, 104.11 MiB/s [2024-11-26T13:24:55.721Z] 26632.75 IOPS, 104.03 MiB/s 00:12:07.151 Latency(us) 00:12:07.151 [2024-11-26T13:24:55.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.151 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:07.151 xnvme_bdev : 5.00 26672.50 104.19 0.00 0.00 2394.50 482.07 7662.67 00:12:07.151 [2024-11-26T13:24:55.721Z] =================================================================================================================== 00:12:07.151 [2024-11-26T13:24:55.721Z] Total : 26672.50 104.19 0.00 0.00 2394.50 482.07 7662.67 00:12:07.724 13:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:07.724 13:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:07.724 13:24:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:07.724 13:24:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:07.725 13:24:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.725 { 00:12:07.725 "subsystems": [ 00:12:07.725 { 00:12:07.725 "subsystem": "bdev", 00:12:07.725 "config": [ 00:12:07.725 { 00:12:07.725 "params": { 00:12:07.725 "io_mechanism": "libaio", 00:12:07.725 "conserve_cpu": false, 00:12:07.725 "filename": "/dev/nvme0n1", 00:12:07.725 "name": "xnvme_bdev" 00:12:07.725 }, 00:12:07.725 "method": "bdev_xnvme_create" 00:12:07.725 }, 00:12:07.725 { 00:12:07.725 "method": "bdev_wait_for_examine" 00:12:07.725 } 00:12:07.725 ] 00:12:07.725 } 00:12:07.725 ] 00:12:07.725 } 00:12:07.725 [2024-11-26 13:24:56.200562] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
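[The MiB/s column in the bdevperf summary above is just IOPS times the 4096-byte I/O size, so the randread result can be checked by hand:

    # 26672.50 IOPS x 4096 B per I/O, converted to MiB/s:
    printf '%.2f MiB/s\n' "$(echo '26672.50 * 4096 / 1048576' | bc -l)"
    # -> 104.19 MiB/s, matching the reported throughput
]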
00:12:07.725 [2024-11-26 13:24:56.200914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69114 ] 00:12:07.986 [2024-11-26 13:24:56.364158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.986 [2024-11-26 13:24:56.484874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.247 Running I/O for 5 seconds... 00:12:10.580 33283.00 IOPS, 130.01 MiB/s [2024-11-26T13:25:00.103Z] 34430.00 IOPS, 134.49 MiB/s [2024-11-26T13:25:01.047Z] 34810.67 IOPS, 135.98 MiB/s [2024-11-26T13:25:01.994Z] 34843.75 IOPS, 136.11 MiB/s [2024-11-26T13:25:01.994Z] 34456.80 IOPS, 134.60 MiB/s 00:12:13.424 Latency(us) 00:12:13.424 [2024-11-26T13:25:01.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.424 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:13.424 xnvme_bdev : 5.01 34421.45 134.46 0.00 0.00 1854.46 406.45 6856.07 00:12:13.424 [2024-11-26T13:25:01.994Z] =================================================================================================================== 00:12:13.424 [2024-11-26T13:25:01.994Z] Total : 34421.45 134.46 0.00 0.00 1854.46 406.45 6856.07 00:12:14.370 00:12:14.370 real 0m12.921s 00:12:14.370 user 0m5.414s 00:12:14.370 sys 0m6.146s 00:12:14.370 13:25:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.370 13:25:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.370 ************************************ 00:12:14.370 END TEST xnvme_bdevperf 00:12:14.370 ************************************ 00:12:14.370 13:25:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:14.370 13:25:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.370 13:25:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.370 13:25:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.370 ************************************ 00:12:14.370 START TEST xnvme_fio_plugin 00:12:14.370 ************************************ 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:14.370 13:25:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.370 { 00:12:14.370 "subsystems": [ 00:12:14.370 { 00:12:14.370 "subsystem": "bdev", 00:12:14.370 "config": [ 00:12:14.370 { 00:12:14.370 "params": { 00:12:14.370 "io_mechanism": "libaio", 00:12:14.370 "conserve_cpu": false, 00:12:14.370 "filename": "/dev/nvme0n1", 00:12:14.370 "name": "xnvme_bdev" 00:12:14.370 }, 00:12:14.370 "method": "bdev_xnvme_create" 00:12:14.370 }, 00:12:14.370 { 00:12:14.370 "method": "bdev_wait_for_examine" 00:12:14.370 } 00:12:14.370 ] 00:12:14.370 } 00:12:14.370 ] 00:12:14.370 } 00:12:14.370 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:14.370 fio-3.35 00:12:14.370 Starting 1 thread 00:12:20.965 00:12:20.965 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69233: Tue Nov 26 13:25:08 2024 00:12:20.965 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5002msec) 00:12:20.965 slat (usec): min=4, max=2437, avg=18.65, stdev=91.83 00:12:20.965 clat (usec): min=106, max=5500, avg=1381.98, stdev=524.10 00:12:20.965 lat (usec): min=181, max=5506, avg=1400.63, stdev=515.28 00:12:20.965 clat percentiles (usec): 00:12:20.965 | 1.00th=[ 297], 5.00th=[ 578], 10.00th=[ 750], 20.00th=[ 947], 00:12:20.965 | 30.00th=[ 1090], 40.00th=[ 1237], 50.00th=[ 1369], 60.00th=[ 1483], 00:12:20.965 | 70.00th=[ 1614], 80.00th=[ 1778], 90.00th=[ 2008], 95.00th=[ 2245], 00:12:20.965 | 99.00th=[ 2900], 99.50th=[ 3163], 99.90th=[ 3785], 99.95th=[ 4047], 00:12:20.965 | 99.99th=[ 4555] 00:12:20.965 bw ( KiB/s): 
min=119432, max=140744, per=99.72%, avg=134579.56, stdev=6388.78, samples=9 00:12:20.965 iops : min=29858, max=35186, avg=33644.89, stdev=1597.19, samples=9 00:12:20.965 lat (usec) : 250=0.52%, 500=3.00%, 750=6.65%, 1000=13.28% 00:12:20.965 lat (msec) : 2=66.06%, 4=10.43%, 10=0.06% 00:12:20.965 cpu : usr=49.75%, sys=42.09%, ctx=12, majf=0, minf=764 00:12:20.965 IO depths : 1=0.6%, 2=1.5%, 4=3.4%, 8=8.5%, 16=22.4%, 32=61.5%, >=64=2.1% 00:12:20.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.965 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:20.965 issued rwts: total=168759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.965 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:20.965 00:12:20.965 Run status group 0 (all jobs): 00:12:20.965 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5002-5002msec 00:12:21.227 ----------------------------------------------------- 00:12:21.227 Suppressions used: 00:12:21.227 count bytes template 00:12:21.227 1 11 /usr/src/fio/parse.c 00:12:21.227 1 8 libtcmalloc_minimal.so 00:12:21.227 1 904 libcrypto.so 00:12:21.227 ----------------------------------------------------- 00:12:21.227 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:21.227 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:21.228 { 00:12:21.228 "subsystems": [ 00:12:21.228 { 00:12:21.228 "subsystem": "bdev", 00:12:21.228 "config": [ 00:12:21.228 { 
00:12:21.228 "params": { 00:12:21.228 "io_mechanism": "libaio", 00:12:21.228 "conserve_cpu": false, 00:12:21.228 "filename": "/dev/nvme0n1", 00:12:21.228 "name": "xnvme_bdev" 00:12:21.228 }, 00:12:21.228 "method": "bdev_xnvme_create" 00:12:21.228 }, 00:12:21.228 { 00:12:21.228 "method": "bdev_wait_for_examine" 00:12:21.228 } 00:12:21.228 ] 00:12:21.228 } 00:12:21.228 ] 00:12:21.228 } 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:21.228 13:25:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.489 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:21.489 fio-3.35 00:12:21.489 Starting 1 thread 00:12:28.078 00:12:28.078 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69325: Tue Nov 26 13:25:15 2024 00:12:28.079 write: IOPS=33.9k, BW=133MiB/s (139MB/s)(663MiB/5001msec); 0 zone resets 00:12:28.079 slat (usec): min=4, max=3345, avg=22.45, stdev=86.16 00:12:28.079 clat (usec): min=104, max=9079, avg=1269.56, stdev=572.62 00:12:28.079 lat (usec): min=191, max=9084, avg=1292.01, stdev=566.60 00:12:28.079 clat percentiles (usec): 00:12:28.079 | 1.00th=[ 265], 5.00th=[ 445], 10.00th=[ 594], 20.00th=[ 791], 00:12:28.079 | 30.00th=[ 947], 40.00th=[ 1074], 50.00th=[ 1221], 60.00th=[ 1352], 00:12:28.079 | 70.00th=[ 1516], 80.00th=[ 1696], 90.00th=[ 1958], 95.00th=[ 2245], 00:12:28.079 | 99.00th=[ 2966], 99.50th=[ 3326], 99.90th=[ 4015], 99.95th=[ 4293], 00:12:28.079 | 99.99th=[ 7963] 00:12:28.079 bw ( KiB/s): min=112216, max=156256, per=99.47%, avg=135042.67, stdev=16216.08, samples=9 00:12:28.079 iops : min=28054, max=39064, avg=33760.67, stdev=4054.02, samples=9 00:12:28.079 lat (usec) : 250=0.86%, 500=5.83%, 750=11.02%, 1000=16.32% 00:12:28.079 lat (msec) : 2=56.85%, 4=9.02%, 10=0.11% 00:12:28.079 cpu : usr=37.46%, sys=51.78%, ctx=30, majf=0, minf=764 00:12:28.079 IO depths : 1=0.4%, 2=1.0%, 4=3.0%, 8=8.4%, 16=23.6%, 32=61.5%, >=64=2.1% 00:12:28.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.079 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:28.079 issued rwts: total=0,169743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.079 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:28.079 00:12:28.079 Run status group 0 (all jobs): 00:12:28.079 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=663MiB (695MB), run=5001-5001msec 00:12:28.079 ----------------------------------------------------- 00:12:28.079 Suppressions used: 00:12:28.079 count bytes template 00:12:28.079 1 11 /usr/src/fio/parse.c 00:12:28.079 1 8 libtcmalloc_minimal.so 00:12:28.079 1 904 libcrypto.so 00:12:28.079 ----------------------------------------------------- 00:12:28.079 00:12:28.079 00:12:28.079 real 0m13.894s 00:12:28.079 user 0m7.218s 00:12:28.079 sys 0m5.327s 00:12:28.079 
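[This second xnvme_rpc pass re-runs the same round-trip with cc["true"]=-c, i.e. the bdev is created with conserve_cpu enabled. The only delta against the earlier run, sketched with the same assumed rpc.py path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Same create as before, plus the -c flag seen in the trace.
    "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    "$rpc" framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true
]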
13:25:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.079 13:25:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:28.079 ************************************ 00:12:28.079 END TEST xnvme_fio_plugin 00:12:28.079 ************************************ 00:12:28.079 13:25:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:28.079 13:25:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:28.079 13:25:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:28.079 13:25:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:28.079 13:25:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.079 13:25:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.079 13:25:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.079 ************************************ 00:12:28.079 START TEST xnvme_rpc 00:12:28.079 ************************************ 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69411 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69411 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69411 ']' 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:28.079 13:25:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.340 [2024-11-26 13:25:16.721845] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
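The xnvme_rpc test starting here is a round-trip over the RPC socket: launch spdk_tgt, create the xnvme bdev, read its parameters back out of the framework config, delete it, and kill the target. Roughly the same steps can be driven by hand with SPDK's scripts/rpc.py; this is a sketch of the manual equivalent run from an SPDK checkout, not the test script itself:

    ./build/bin/spdk_tgt &                  # listens on /var/tmp/spdk.sock
    tgt=$!; sleep 1                         # the test uses waitforlisten instead of sleep
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill $tgt

The trailing -c is the conserve_cpu toggle the harness stores in cc["true"]; the cc["false"] leg simply omits it.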
00:12:28.340 [2024-11-26 13:25:16.722014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69411 ] 00:12:28.340 [2024-11-26 13:25:16.892233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.602 [2024-11-26 13:25:17.013553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.176 xnvme_bdev 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.176 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69411 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69411 ']' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69411 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69411 00:12:29.438 killing process with pid 69411 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69411' 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69411 00:12:29.438 13:25:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69411 00:12:31.355 00:12:31.355 real 0m2.936s 00:12:31.355 user 0m2.910s 00:12:31.355 sys 0m0.507s 00:12:31.355 13:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.355 13:25:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.355 ************************************ 00:12:31.355 END TEST xnvme_rpc 00:12:31.355 ************************************ 00:12:31.355 13:25:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.355 13:25:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.355 13:25:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.355 13:25:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.355 ************************************ 00:12:31.355 START TEST xnvme_bdevperf 00:12:31.355 ************************************ 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.355 13:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.355 { 00:12:31.355 "subsystems": [ 00:12:31.355 { 00:12:31.355 "subsystem": "bdev", 00:12:31.355 "config": [ 00:12:31.355 { 00:12:31.355 "params": { 00:12:31.355 "io_mechanism": "libaio", 00:12:31.355 "conserve_cpu": true, 00:12:31.355 "filename": "/dev/nvme0n1", 00:12:31.355 "name": "xnvme_bdev" 00:12:31.355 }, 00:12:31.355 "method": "bdev_xnvme_create" 00:12:31.355 }, 00:12:31.355 { 00:12:31.355 "method": "bdev_wait_for_examine" 00:12:31.355 } 00:12:31.355 ] 00:12:31.355 } 00:12:31.355 ] 00:12:31.355 } 00:12:31.355 [2024-11-26 13:25:19.691076] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:12:31.355 [2024-11-26 13:25:19.691221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69484 ] 00:12:31.355 [2024-11-26 13:25:19.856516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.617 [2024-11-26 13:25:19.977961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.879 Running I/O for 5 seconds... 00:12:33.769 35041.00 IOPS, 136.88 MiB/s [2024-11-26T13:25:23.728Z] 33208.00 IOPS, 129.72 MiB/s [2024-11-26T13:25:24.303Z] 32970.67 IOPS, 128.79 MiB/s [2024-11-26T13:25:25.692Z] 33041.50 IOPS, 129.07 MiB/s [2024-11-26T13:25:25.692Z] 32758.60 IOPS, 127.96 MiB/s 00:12:37.122 Latency(us) 00:12:37.122 [2024-11-26T13:25:25.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.122 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.122 xnvme_bdev : 5.01 32725.13 127.83 0.00 0.00 1950.58 239.46 10788.23 00:12:37.122 [2024-11-26T13:25:25.692Z] =================================================================================================================== 00:12:37.122 [2024-11-26T13:25:25.692Z] Total : 32725.13 127.83 0.00 0.00 1950.58 239.46 10788.23 00:12:37.696 13:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.696 13:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:37.696 13:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.696 13:25:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.696 13:25:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.696 { 00:12:37.696 "subsystems": [ 00:12:37.696 { 00:12:37.696 "subsystem": "bdev", 00:12:37.696 "config": [ 00:12:37.696 { 00:12:37.696 "params": { 00:12:37.696 "io_mechanism": "libaio", 00:12:37.696 "conserve_cpu": true, 00:12:37.696 "filename": "/dev/nvme0n1", 00:12:37.696 "name": "xnvme_bdev" 00:12:37.696 }, 00:12:37.696 "method": "bdev_xnvme_create" 00:12:37.696 }, 00:12:37.696 { 00:12:37.696 "method": "bdev_wait_for_examine" 00:12:37.696 } 00:12:37.696 ] 00:12:37.696 } 00:12:37.696 ] 00:12:37.696 } 00:12:37.696 [2024-11-26 13:25:26.249036] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
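For reference, the bdevperf switches traced in these runs are: -q (queue depth), -w (workload), -t (runtime in seconds), -T (run only against the named bdev), and -o (I/O size in bytes); the bdev definition again arrives as JSON on an inherited descriptor via --json /dev/fd/62. A hand-run sketch under the same assumptions as the fio sketch earlier, with conserve_cpu flipped to match this leg:

    # $conf as in the fio sketch above, with "conserve_cpu": true
    ./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
      --json <(echo "$conf")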
00:12:37.696 [2024-11-26 13:25:26.249181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69559 ] 00:12:37.959 [2024-11-26 13:25:26.411217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.221 [2024-11-26 13:25:26.558137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.484 Running I/O for 5 seconds... 00:12:40.376 32521.00 IOPS, 127.04 MiB/s [2024-11-26T13:25:30.329Z] 32374.50 IOPS, 126.46 MiB/s [2024-11-26T13:25:31.272Z] 32517.33 IOPS, 127.02 MiB/s [2024-11-26T13:25:32.219Z] 32822.00 IOPS, 128.21 MiB/s [2024-11-26T13:25:32.219Z] 32886.80 IOPS, 128.46 MiB/s 00:12:43.649 Latency(us) 00:12:43.649 [2024-11-26T13:25:32.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.649 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:43.649 xnvme_bdev : 5.00 32873.22 128.41 0.00 0.00 1942.85 422.20 5696.59 00:12:43.649 [2024-11-26T13:25:32.219Z] =================================================================================================================== 00:12:43.649 [2024-11-26T13:25:32.219Z] Total : 32873.22 128.41 0.00 0.00 1942.85 422.20 5696.59 00:12:44.222 00:12:44.222 real 0m13.171s 00:12:44.222 user 0m5.349s 00:12:44.222 sys 0m6.247s 00:12:44.222 13:25:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.484 13:25:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.484 ************************************ 00:12:44.484 END TEST xnvme_bdevperf 00:12:44.484 ************************************ 00:12:44.484 13:25:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:44.484 13:25:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.484 13:25:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.484 13:25:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.484 ************************************ 00:12:44.484 START TEST xnvme_fio_plugin 00:12:44.484 ************************************ 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:44.484 13:25:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.484 { 00:12:44.484 "subsystems": [ 00:12:44.484 { 00:12:44.484 "subsystem": "bdev", 00:12:44.484 "config": [ 00:12:44.484 { 00:12:44.484 "params": { 00:12:44.484 "io_mechanism": "libaio", 00:12:44.484 "conserve_cpu": true, 00:12:44.484 "filename": "/dev/nvme0n1", 00:12:44.484 "name": "xnvme_bdev" 00:12:44.484 }, 00:12:44.484 "method": "bdev_xnvme_create" 00:12:44.484 }, 00:12:44.484 { 00:12:44.484 "method": "bdev_wait_for_examine" 00:12:44.484 } 00:12:44.484 ] 00:12:44.484 } 00:12:44.484 ] 00:12:44.484 } 00:12:44.746 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:44.746 fio-3.35 00:12:44.746 Starting 1 thread 00:12:51.354 00:12:51.354 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69680: Tue Nov 26 13:25:38 2024 00:12:51.354 read: IOPS=33.3k, BW=130MiB/s (137MB/s)(651MiB/5001msec) 00:12:51.354 slat (usec): min=4, max=2349, avg=18.82, stdev=94.02 00:12:51.354 clat (usec): min=106, max=4896, avg=1402.96, stdev=519.99 00:12:51.354 lat (usec): min=196, max=4983, avg=1421.78, stdev=510.25 00:12:51.354 clat percentiles (usec): 00:12:51.354 | 1.00th=[ 297], 5.00th=[ 578], 10.00th=[ 758], 20.00th=[ 963], 00:12:51.354 | 30.00th=[ 1123], 40.00th=[ 1270], 50.00th=[ 1401], 60.00th=[ 1516], 00:12:51.354 | 70.00th=[ 1647], 80.00th=[ 1795], 90.00th=[ 2024], 95.00th=[ 2245], 00:12:51.354 | 99.00th=[ 2868], 99.50th=[ 3130], 99.90th=[ 3752], 99.95th=[ 3916], 00:12:51.354 | 99.99th=[ 4424] 00:12:51.354 bw ( KiB/s): 
min=124960, max=140440, per=99.44%, avg=132607.11, stdev=5007.57, samples=9 00:12:51.354 iops : min=31240, max=35110, avg=33151.78, stdev=1251.89, samples=9 00:12:51.354 lat (usec) : 250=0.53%, 500=2.95%, 750=6.30%, 1000=12.16% 00:12:51.354 lat (msec) : 2=67.10%, 4=10.92%, 10=0.04% 00:12:51.354 cpu : usr=49.92%, sys=42.50%, ctx=11, majf=0, minf=764 00:12:51.354 IO depths : 1=0.7%, 2=1.5%, 4=3.5%, 8=8.7%, 16=22.6%, 32=60.8%, >=64=2.1% 00:12:51.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.354 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:51.354 issued rwts: total=166726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:51.354 00:12:51.354 Run status group 0 (all jobs): 00:12:51.354 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=651MiB (683MB), run=5001-5001msec 00:12:51.354 ----------------------------------------------------- 00:12:51.354 Suppressions used: 00:12:51.354 count bytes template 00:12:51.354 1 11 /usr/src/fio/parse.c 00:12:51.354 1 8 libtcmalloc_minimal.so 00:12:51.354 1 904 libcrypto.so 00:12:51.354 ----------------------------------------------------- 00:12:51.354 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.354 
13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:51.354 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.355 13:25:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.355 { 00:12:51.355 "subsystems": [ 00:12:51.355 { 00:12:51.355 "subsystem": "bdev", 00:12:51.355 "config": [ 00:12:51.355 { 00:12:51.355 "params": { 00:12:51.355 "io_mechanism": "libaio", 00:12:51.355 "conserve_cpu": true, 00:12:51.355 "filename": "/dev/nvme0n1", 00:12:51.355 "name": "xnvme_bdev" 00:12:51.355 }, 00:12:51.355 "method": "bdev_xnvme_create" 00:12:51.355 }, 00:12:51.355 { 00:12:51.355 "method": "bdev_wait_for_examine" 00:12:51.355 } 00:12:51.355 ] 00:12:51.355 } 00:12:51.355 ] 00:12:51.355 } 00:12:51.616 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:51.616 fio-3.35 00:12:51.616 Starting 1 thread 00:12:58.206 00:12:58.206 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69771: Tue Nov 26 13:25:45 2024 00:12:58.206 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5001msec); 0 zone resets 00:12:58.206 slat (usec): min=4, max=2178, avg=20.82, stdev=94.32 00:12:58.206 clat (usec): min=81, max=7809, avg=1328.78, stdev=538.94 00:12:58.206 lat (usec): min=214, max=7814, avg=1349.60, stdev=530.95 00:12:58.206 clat percentiles (usec): 00:12:58.206 | 1.00th=[ 289], 5.00th=[ 537], 10.00th=[ 693], 20.00th=[ 889], 00:12:58.206 | 30.00th=[ 1037], 40.00th=[ 1172], 50.00th=[ 1287], 60.00th=[ 1418], 00:12:58.206 | 70.00th=[ 1565], 80.00th=[ 1729], 90.00th=[ 1958], 95.00th=[ 2212], 00:12:58.206 | 99.00th=[ 2999], 99.50th=[ 3326], 99.90th=[ 4015], 99.95th=[ 4359], 00:12:58.206 | 99.99th=[ 6390] 00:12:58.206 bw ( KiB/s): min=129944, max=144488, per=99.97%, avg=134927.78, stdev=5471.24, samples=9 00:12:58.206 iops : min=32486, max=36122, avg=33731.89, stdev=1367.85, samples=9 00:12:58.206 lat (usec) : 100=0.01%, 250=0.58%, 500=3.68%, 750=8.31%, 1000=14.81% 00:12:58.206 lat (msec) : 2=63.66%, 4=8.85%, 10=0.10% 00:12:58.206 cpu : usr=44.06%, sys=47.14%, ctx=36, majf=0, minf=764 00:12:58.206 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.1%, 16=22.3%, 32=62.6%, >=64=2.2% 00:12:58.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.206 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:58.206 issued rwts: total=0,168747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.206 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:58.206 00:12:58.206 Run status group 0 (all jobs): 00:12:58.206 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5001-5001msec 00:12:58.467 ----------------------------------------------------- 00:12:58.467 Suppressions used: 00:12:58.467 count bytes template 00:12:58.467 1 11 /usr/src/fio/parse.c 00:12:58.467 1 8 libtcmalloc_minimal.so 00:12:58.467 1 904 libcrypto.so 00:12:58.467 ----------------------------------------------------- 00:12:58.467 00:12:58.467 00:12:58.467 real 0m14.008s 00:12:58.467 user 0m7.598s 00:12:58.467 sys 0m5.196s 
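The ldd | grep | awk dance repeated before every fio launch above is sanitizer plumbing: if the spdk_bdev plugin was built against an ASan runtime, that runtime has to come first in LD_PRELOAD or the preloaded plugin errors out at startup. A stripped-down reconstruction of the traced logic (the sanitizer list and the awk column are verbatim from the trace; the surrounding wrapper is paraphrase):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # column 3 of ldd output is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"   # usual fio arguments follow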
00:12:58.467 13:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.467 ************************************ 00:12:58.467 END TEST xnvme_fio_plugin 00:12:58.467 ************************************ 00:12:58.467 13:25:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:58.467 13:25:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:58.467 13:25:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:58.467 13:25:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.467 13:25:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.467 ************************************ 00:12:58.467 START TEST xnvme_rpc 00:12:58.467 ************************************ 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69858 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69858 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69858 ']' 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.467 13:25:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.467 [2024-11-26 13:25:47.021229] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
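From here on the suite has switched io_mechanism from libaio to io_uring: xnvme.sh walks an outer list of I/O mechanisms and an inner conserve_cpu toggle, rewriting the same method_bdev_xnvme_create_0 template each time, so the rpc/bdevperf/fio trio below repeats once per combination. Schematically (inferred from the traced assignments, not the script verbatim):

    declare -A method_bdev_xnvme_create_0=([name]=xnvme_bdev [filename]=/dev/nvme0n1)
    for io in "${xnvme_io[@]}"; do                 # libaio, io_uring, ...
        method_bdev_xnvme_create_0[io_mechanism]=$io
        for cc in "${xnvme_conserve_cpu[@]}"; do   # false, true
            method_bdev_xnvme_create_0[conserve_cpu]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done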
00:12:58.467 [2024-11-26 13:25:47.021401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69858 ] 00:12:58.728 [2024-11-26 13:25:47.187592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.987 [2024-11-26 13:25:47.309521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.556 13:25:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.556 13:25:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.556 13:25:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:59.556 13:25:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.556 13:25:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 xnvme_bdev 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:59.556 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69858 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69858 ']' 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69858 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69858 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.817 killing process with pid 69858 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69858' 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69858 00:12:59.817 13:25:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69858 00:13:01.729 ************************************ 00:13:01.729 END TEST xnvme_rpc 00:13:01.729 ************************************ 00:13:01.729 00:13:01.729 real 0m2.880s 00:13:01.729 user 0m2.879s 00:13:01.729 sys 0m0.483s 00:13:01.729 13:25:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.729 13:25:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.729 13:25:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:01.729 13:25:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.729 13:25:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.729 13:25:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.729 ************************************ 00:13:01.729 START TEST xnvme_bdevperf 00:13:01.729 ************************************ 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.729 13:25:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.729 { 00:13:01.729 "subsystems": [ 00:13:01.729 { 00:13:01.729 "subsystem": "bdev", 00:13:01.729 "config": [ 00:13:01.729 { 00:13:01.729 "params": { 00:13:01.729 "io_mechanism": "io_uring", 00:13:01.729 "conserve_cpu": false, 00:13:01.729 "filename": "/dev/nvme0n1", 00:13:01.729 "name": "xnvme_bdev" 00:13:01.729 }, 00:13:01.729 "method": "bdev_xnvme_create" 00:13:01.730 }, 00:13:01.730 { 00:13:01.730 "method": "bdev_wait_for_examine" 00:13:01.730 } 00:13:01.730 ] 00:13:01.730 } 00:13:01.730 ] 00:13:01.730 } 00:13:01.730 [2024-11-26 13:25:49.964347] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:13:01.730 [2024-11-26 13:25:49.964498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69926 ] 00:13:01.730 [2024-11-26 13:25:50.125254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.730 [2024-11-26 13:25:50.249080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.991 Running I/O for 5 seconds... 00:13:04.318 32946.00 IOPS, 128.70 MiB/s [2024-11-26T13:25:53.830Z] 33840.50 IOPS, 132.19 MiB/s [2024-11-26T13:25:54.811Z] 34522.67 IOPS, 134.85 MiB/s [2024-11-26T13:25:55.922Z] 34938.50 IOPS, 136.48 MiB/s [2024-11-26T13:25:55.922Z] 34798.60 IOPS, 135.93 MiB/s 00:13:07.352 Latency(us) 00:13:07.352 [2024-11-26T13:25:55.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.352 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.352 xnvme_bdev : 5.00 34792.01 135.91 0.00 0.00 1835.85 356.04 8620.50 00:13:07.352 [2024-11-26T13:25:55.922Z] =================================================================================================================== 00:13:07.352 [2024-11-26T13:25:55.922Z] Total : 34792.01 135.91 0.00 0.00 1835.85 356.04 8620.50 00:13:07.978 13:25:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.978 13:25:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:07.978 13:25:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:07.978 13:25:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.978 13:25:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.978 { 00:13:07.978 "subsystems": [ 00:13:07.978 { 00:13:07.978 "subsystem": "bdev", 00:13:07.978 "config": [ 00:13:07.978 { 00:13:07.978 "params": { 00:13:07.978 "io_mechanism": "io_uring", 00:13:07.978 "conserve_cpu": false, 00:13:07.978 "filename": "/dev/nvme0n1", 00:13:07.978 "name": "xnvme_bdev" 00:13:07.978 }, 00:13:07.978 "method": "bdev_xnvme_create" 00:13:07.978 }, 00:13:07.978 { 00:13:07.978 "method": "bdev_wait_for_examine" 00:13:07.978 } 00:13:07.978 ] 00:13:07.978 } 00:13:07.978 ] 00:13:07.978 } 00:13:07.979 [2024-11-26 13:25:56.385426] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
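A quick sanity check on bdevperf's units: at a 4 KiB I/O size, MiB/s is IOPS x 4096 / 2^20, so the 34792.01 IOPS randread total above matches the reported 135.91 MiB/s. For example:

    awk 'BEGIN { print 34792.01 * 4096 / (1024 * 1024) }'   # 135.906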
00:13:07.979 [2024-11-26 13:25:56.385597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70007 ] 00:13:08.276 [2024-11-26 13:25:56.551389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.276 [2024-11-26 13:25:56.668847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.539 Running I/O for 5 seconds... 00:13:10.425 32982.00 IOPS, 128.84 MiB/s [2024-11-26T13:26:00.381Z] 33119.00 IOPS, 129.37 MiB/s [2024-11-26T13:26:01.326Z] 33214.33 IOPS, 129.74 MiB/s [2024-11-26T13:26:02.268Z] 33498.25 IOPS, 130.85 MiB/s [2024-11-26T13:26:02.268Z] 33447.40 IOPS, 130.65 MiB/s 00:13:13.698 Latency(us) 00:13:13.698 [2024-11-26T13:26:02.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.698 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:13.698 xnvme_bdev : 5.00 33446.78 130.65 0.00 0.00 1909.62 354.46 6755.25 00:13:13.698 [2024-11-26T13:26:02.268Z] =================================================================================================================== 00:13:13.698 [2024-11-26T13:26:02.268Z] Total : 33446.78 130.65 0.00 0.00 1909.62 354.46 6755.25 00:13:14.273 00:13:14.273 real 0m12.855s 00:13:14.273 user 0m6.104s 00:13:14.273 sys 0m6.496s 00:13:14.273 13:26:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.273 13:26:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:14.273 ************************************ 00:13:14.273 END TEST xnvme_bdevperf 00:13:14.273 ************************************ 00:13:14.273 13:26:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:14.273 13:26:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.273 13:26:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.273 13:26:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.273 ************************************ 00:13:14.273 START TEST xnvme_fio_plugin 00:13:14.273 ************************************ 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:14.273 
13:26:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.273 13:26:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.273 { 00:13:14.273 "subsystems": [ 00:13:14.273 { 00:13:14.273 "subsystem": "bdev", 00:13:14.273 "config": [ 00:13:14.273 { 00:13:14.273 "params": { 00:13:14.273 "io_mechanism": "io_uring", 00:13:14.273 "conserve_cpu": false, 00:13:14.273 "filename": "/dev/nvme0n1", 00:13:14.273 "name": "xnvme_bdev" 00:13:14.273 }, 00:13:14.273 "method": "bdev_xnvme_create" 00:13:14.273 }, 00:13:14.273 { 00:13:14.273 "method": "bdev_wait_for_examine" 00:13:14.273 } 00:13:14.273 ] 00:13:14.273 } 00:13:14.273 ] 00:13:14.273 } 00:13:14.535 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.535 fio-3.35 00:13:14.535 Starting 1 thread 00:13:21.130 00:13:21.130 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70127: Tue Nov 26 13:26:08 2024 00:13:21.130 read: IOPS=35.3k, BW=138MiB/s (145MB/s)(689MiB/5001msec) 00:13:21.130 slat (usec): min=2, max=250, avg= 3.12, stdev= 2.21 00:13:21.130 clat (usec): min=699, max=9174, avg=1687.92, stdev=343.34 00:13:21.130 lat (usec): min=702, max=9177, avg=1691.04, stdev=343.58 00:13:21.130 clat percentiles (usec): 00:13:21.130 | 1.00th=[ 1172], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1418], 00:13:21.130 | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1647], 60.00th=[ 1713], 00:13:21.130 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2089], 95.00th=[ 2245], 00:13:21.130 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 4228], 99.95th=[ 4555], 00:13:21.130 | 99.99th=[ 9110] 00:13:21.130 bw ( KiB/s): 
min=124678, max=153600, per=100.00%, avg=141198.89, stdev=11167.29, samples=9 00:13:21.130 iops : min=31169, max=38400, avg=35299.67, stdev=2791.92, samples=9 00:13:21.130 lat (usec) : 750=0.01%, 1000=0.01% 00:13:21.130 lat (msec) : 2=85.64%, 4=14.23%, 10=0.11% 00:13:21.130 cpu : usr=29.70%, sys=68.88%, ctx=19, majf=0, minf=762 00:13:21.130 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:21.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:21.130 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:21.130 issued rwts: total=176509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:21.130 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:21.130 00:13:21.130 Run status group 0 (all jobs): 00:13:21.130 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=689MiB (723MB), run=5001-5001msec 00:13:21.130 ----------------------------------------------------- 00:13:21.130 Suppressions used: 00:13:21.130 count bytes template 00:13:21.130 1 11 /usr/src/fio/parse.c 00:13:21.130 1 8 libtcmalloc_minimal.so 00:13:21.130 1 904 libcrypto.so 00:13:21.131 ----------------------------------------------------- 00:13:21.131 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:21.131 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.392 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:21.392 13:26:09 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:21.392 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:21.392 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:21.392 13:26:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:21.392 { 00:13:21.392 "subsystems": [ 00:13:21.392 { 00:13:21.392 "subsystem": "bdev", 00:13:21.392 "config": [ 00:13:21.392 { 00:13:21.392 "params": { 00:13:21.392 "io_mechanism": "io_uring", 00:13:21.392 "conserve_cpu": false, 00:13:21.392 "filename": "/dev/nvme0n1", 00:13:21.392 "name": "xnvme_bdev" 00:13:21.392 }, 00:13:21.392 "method": "bdev_xnvme_create" 00:13:21.392 }, 00:13:21.392 { 00:13:21.392 "method": "bdev_wait_for_examine" 00:13:21.392 } 00:13:21.392 ] 00:13:21.392 } 00:13:21.392 ] 00:13:21.392 } 00:13:21.392 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:21.392 fio-3.35 00:13:21.392 Starting 1 thread 00:13:27.983 00:13:27.983 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70214: Tue Nov 26 13:26:15 2024 00:13:27.983 write: IOPS=32.4k, BW=126MiB/s (133MB/s)(632MiB/5001msec); 0 zone resets 00:13:27.983 slat (usec): min=2, max=194, avg= 3.45, stdev= 2.23 00:13:27.983 clat (usec): min=431, max=8816, avg=1836.82, stdev=333.25 00:13:27.983 lat (usec): min=452, max=8819, avg=1840.27, stdev=333.48 00:13:27.983 clat percentiles (usec): 00:13:27.983 | 1.00th=[ 1221], 5.00th=[ 1369], 10.00th=[ 1450], 20.00th=[ 1565], 00:13:27.983 | 30.00th=[ 1647], 40.00th=[ 1729], 50.00th=[ 1811], 60.00th=[ 1876], 00:13:27.983 | 70.00th=[ 1975], 80.00th=[ 2089], 90.00th=[ 2245], 95.00th=[ 2409], 00:13:27.983 | 99.00th=[ 2737], 99.50th=[ 2933], 99.90th=[ 3425], 99.95th=[ 3589], 00:13:27.983 | 99.99th=[ 6849] 00:13:27.983 bw ( KiB/s): min=125944, max=132024, per=100.00%, avg=129879.67, stdev=2140.53, samples=9 00:13:27.983 iops : min=31486, max=33006, avg=32469.89, stdev=535.14, samples=9 00:13:27.983 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:13:27.983 lat (msec) : 2=72.08%, 4=27.81%, 10=0.03% 00:13:27.983 cpu : usr=30.54%, sys=67.66%, ctx=28, majf=0, minf=762 00:13:27.983 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:27.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.983 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:27.983 issued rwts: total=0,161891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.983 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.983 00:13:27.983 Run status group 0 (all jobs): 00:13:27.983 WRITE: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=632MiB (663MB), run=5001-5001msec 00:13:27.983 ----------------------------------------------------- 00:13:27.983 Suppressions used: 00:13:27.983 count bytes template 00:13:27.983 1 11 /usr/src/fio/parse.c 00:13:27.983 1 8 libtcmalloc_minimal.so 00:13:27.983 1 904 libcrypto.so 00:13:27.983 ----------------------------------------------------- 00:13:27.983 00:13:28.246 00:13:28.246 real 0m13.763s 00:13:28.246 user 0m5.866s 00:13:28.246 sys 0m7.397s 00:13:28.246 13:26:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.246 ************************************ 00:13:28.246 END TEST xnvme_fio_plugin 00:13:28.246 ************************************ 00:13:28.246 13:26:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:28.246 13:26:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:28.246 13:26:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:28.246 13:26:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:28.246 13:26:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:28.246 13:26:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:28.246 13:26:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.246 13:26:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.246 ************************************ 00:13:28.246 START TEST xnvme_rpc 00:13:28.246 ************************************ 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70304 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70304 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70304 ']' 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.246 13:26:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.246 [2024-11-26 13:26:16.726015] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
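The rpc_xnvme calls threaded through these RPC tests are a small helper: dump the bdev subsystem config and pull one field out of the bdev_xnvme_create params with jq. Reconstructed as a standalone function (the jq filter is verbatim from the trace; the wrapper itself is an assumption):

    rpc_xnvme() {   # usage: rpc_xnvme name|filename|io_mechanism|conserve_cpu
        ./scripts/rpc.py framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    [[ $(rpc_xnvme conserve_cpu) == true ]]   # the assertion this test leg makes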
00:13:28.246 [2024-11-26 13:26:16.726168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70304 ] 00:13:28.508 [2024-11-26 13:26:16.892183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.508 [2024-11-26 13:26:17.010056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.452 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.452 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 xnvme_bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:29.453 13:26:17 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70304 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70304 ']' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70304 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70304 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.453 killing process with pid 70304 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70304' 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70304 00:13:29.453 13:26:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70304 00:13:31.375 00:13:31.375 real 0m2.865s 00:13:31.375 user 0m2.879s 00:13:31.375 sys 0m0.449s 00:13:31.375 13:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.375 13:26:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 ************************************ 00:13:31.375 END TEST xnvme_rpc 00:13:31.375 ************************************ 00:13:31.375 13:26:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:31.375 13:26:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.375 13:26:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.375 13:26:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 ************************************ 00:13:31.375 START TEST xnvme_bdevperf 00:13:31.375 ************************************ 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.375 13:26:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.375 { 00:13:31.375 "subsystems": [ 00:13:31.375 { 00:13:31.375 "subsystem": "bdev", 00:13:31.375 "config": [ 00:13:31.375 { 00:13:31.375 "params": { 00:13:31.375 "io_mechanism": "io_uring", 00:13:31.375 "conserve_cpu": true, 00:13:31.375 "filename": "/dev/nvme0n1", 00:13:31.375 "name": "xnvme_bdev" 00:13:31.375 }, 00:13:31.375 "method": "bdev_xnvme_create" 00:13:31.375 }, 00:13:31.375 { 00:13:31.375 "method": "bdev_wait_for_examine" 00:13:31.375 } 00:13:31.375 ] 00:13:31.375 } 00:13:31.375 ] 00:13:31.375 } 00:13:31.375 [2024-11-26 13:26:19.645979] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:13:31.375 [2024-11-26 13:26:19.646120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70368 ] 00:13:31.375 [2024-11-26 13:26:19.812019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.636 [2024-11-26 13:26:19.940202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.898 Running I/O for 5 seconds... 00:13:33.788 32610.00 IOPS, 127.38 MiB/s [2024-11-26T13:26:23.303Z] 32486.50 IOPS, 126.90 MiB/s [2024-11-26T13:26:24.246Z] 32395.00 IOPS, 126.54 MiB/s [2024-11-26T13:26:25.634Z] 32444.50 IOPS, 126.74 MiB/s [2024-11-26T13:26:25.634Z] 32510.40 IOPS, 126.99 MiB/s 00:13:37.064 Latency(us) 00:13:37.064 [2024-11-26T13:26:25.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.064 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:37.064 xnvme_bdev : 5.00 32507.54 126.98 0.00 0.00 1965.04 869.61 12603.08 00:13:37.064 [2024-11-26T13:26:25.634Z] =================================================================================================================== 00:13:37.064 [2024-11-26T13:26:25.634Z] Total : 32507.54 126.98 0.00 0.00 1965.04 869.61 12603.08 00:13:37.637 13:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.637 13:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:37.637 13:26:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.637 13:26:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.637 13:26:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.637 { 00:13:37.637 "subsystems": [ 00:13:37.637 { 00:13:37.637 "subsystem": "bdev", 00:13:37.637 "config": [ 00:13:37.637 { 00:13:37.637 "params": { 00:13:37.637 "io_mechanism": "io_uring", 00:13:37.637 "conserve_cpu": true, 00:13:37.637 "filename": "/dev/nvme0n1", 00:13:37.637 "name": "xnvme_bdev" 00:13:37.637 }, 00:13:37.637 "method": "bdev_xnvme_create" 00:13:37.637 }, 00:13:37.637 { 00:13:37.637 "method": "bdev_wait_for_examine" 00:13:37.637 } 00:13:37.638 ] 00:13:37.638 } 00:13:37.638 ] 00:13:37.638 } 00:13:37.638 [2024-11-26 13:26:26.075670] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
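[Annotation] The bdevperf run launched here takes its bdev configuration as JSON on --json /dev/fd/62, fed by gen_conf; the block it receives is the one printed above. A standalone reproduction (a sketch) gets the same effect with a here-doc and process substitution:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "io_uring", "conserve_cpu": true,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096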
00:13:37.638 [2024-11-26 13:26:26.075812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70452 ] 00:13:37.899 [2024-11-26 13:26:26.232029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.899 [2024-11-26 13:26:26.349213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.161 Running I/O for 5 seconds... 00:13:40.495 33562.00 IOPS, 131.10 MiB/s [2024-11-26T13:26:29.645Z] 33960.00 IOPS, 132.66 MiB/s [2024-11-26T13:26:31.033Z] 34021.67 IOPS, 132.90 MiB/s [2024-11-26T13:26:31.978Z] 33742.25 IOPS, 131.81 MiB/s [2024-11-26T13:26:31.978Z] 33640.20 IOPS, 131.41 MiB/s 00:13:43.408 Latency(us) 00:13:43.408 [2024-11-26T13:26:31.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.408 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:43.408 xnvme_bdev : 5.00 33620.50 131.33 0.00 0.00 1899.16 730.98 8973.39 00:13:43.408 [2024-11-26T13:26:31.978Z] =================================================================================================================== 00:13:43.408 [2024-11-26T13:26:31.978Z] Total : 33620.50 131.33 0.00 0.00 1899.16 730.98 8973.39 00:13:43.981 00:13:43.981 real 0m12.840s 00:13:43.981 user 0m8.738s 00:13:43.981 sys 0m3.543s 00:13:43.981 ************************************ 00:13:43.981 END TEST xnvme_bdevperf 00:13:43.981 ************************************ 00:13:43.981 13:26:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.981 13:26:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.981 13:26:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:43.981 13:26:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.981 13:26:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.981 13:26:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.981 ************************************ 00:13:43.981 START TEST xnvme_fio_plugin 00:13:43.981 ************************************ 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.981 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.982 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.982 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.982 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.982 13:26:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.982 { 00:13:43.982 "subsystems": [ 00:13:43.982 { 00:13:43.982 "subsystem": "bdev", 00:13:43.982 "config": [ 00:13:43.982 { 00:13:43.982 "params": { 00:13:43.982 "io_mechanism": "io_uring", 00:13:43.982 "conserve_cpu": true, 00:13:43.982 "filename": "/dev/nvme0n1", 00:13:43.982 "name": "xnvme_bdev" 00:13:43.982 }, 00:13:43.982 "method": "bdev_xnvme_create" 00:13:43.982 }, 00:13:43.982 { 00:13:43.982 "method": "bdev_wait_for_examine" 00:13:43.982 } 00:13:43.982 ] 00:13:43.982 } 00:13:43.982 ] 00:13:43.982 } 00:13:44.243 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.243 fio-3.35 00:13:44.243 Starting 1 thread 00:13:50.833 00:13:50.833 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70571: Tue Nov 26 13:26:38 2024 00:13:50.833 read: IOPS=33.6k, BW=131MiB/s (137MB/s)(656MiB/5001msec) 00:13:50.833 slat (nsec): min=2714, max=91632, avg=3216.77, stdev=1636.84 00:13:50.833 clat (usec): min=990, max=3603, avg=1774.97, stdev=283.07 00:13:50.833 lat (usec): min=993, max=3607, avg=1778.18, stdev=283.36 00:13:50.833 clat percentiles (usec): 00:13:50.833 | 1.00th=[ 1221], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1532], 00:13:50.833 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1827], 00:13:50.833 | 70.00th=[ 1909], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:50.833 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 2966], 99.95th=[ 3228], 00:13:50.833 | 99.99th=[ 3556] 00:13:50.833 bw ( 
KiB/s): min=127488, max=147456, per=100.00%, avg=134997.33, stdev=5798.27, samples=9 00:13:50.833 iops : min=31872, max=36864, avg=33749.33, stdev=1449.57, samples=9 00:13:50.833 lat (usec) : 1000=0.01% 00:13:50.833 lat (msec) : 2=80.14%, 4=19.86% 00:13:50.833 cpu : usr=67.14%, sys=29.32%, ctx=6, majf=0, minf=762 00:13:50.833 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:50.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.833 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:50.833 issued rwts: total=167872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.833 00:13:50.833 Run status group 0 (all jobs): 00:13:50.833 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=656MiB (688MB), run=5001-5001msec 00:13:51.094 ----------------------------------------------------- 00:13:51.094 Suppressions used: 00:13:51.094 count bytes template 00:13:51.094 1 11 /usr/src/fio/parse.c 00:13:51.094 1 8 libtcmalloc_minimal.so 00:13:51.094 1 904 libcrypto.so 00:13:51.094 ----------------------------------------------------- 00:13:51.094 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.094 13:26:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.094 13:26:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.094 { 00:13:51.094 "subsystems": [ 00:13:51.094 { 00:13:51.094 "subsystem": "bdev", 00:13:51.094 "config": [ 00:13:51.094 { 00:13:51.094 "params": { 00:13:51.095 "io_mechanism": "io_uring", 00:13:51.095 "conserve_cpu": true, 00:13:51.095 "filename": "/dev/nvme0n1", 00:13:51.095 "name": "xnvme_bdev" 00:13:51.095 }, 00:13:51.095 "method": "bdev_xnvme_create" 00:13:51.095 }, 00:13:51.095 { 00:13:51.095 "method": "bdev_wait_for_examine" 00:13:51.095 } 00:13:51.095 ] 00:13:51.095 } 00:13:51.095 ] 00:13:51.095 } 00:13:51.095 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.095 fio-3.35 00:13:51.095 Starting 1 thread 00:13:57.734 00:13:57.734 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70663: Tue Nov 26 13:26:45 2024 00:13:57.734 write: IOPS=33.4k, BW=131MiB/s (137MB/s)(653MiB/5002msec); 0 zone resets 00:13:57.734 slat (nsec): min=2782, max=64454, avg=3626.63, stdev=1705.03 00:13:57.734 clat (usec): min=431, max=8995, avg=1768.14, stdev=267.92 00:13:57.734 lat (usec): min=440, max=8999, avg=1771.77, stdev=268.12 00:13:57.734 clat percentiles (usec): 00:13:57.734 | 1.00th=[ 1319], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1565], 00:13:57.734 | 30.00th=[ 1614], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1795], 00:13:57.734 | 70.00th=[ 1876], 80.00th=[ 1958], 90.00th=[ 2089], 95.00th=[ 2212], 00:13:57.734 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3195], 99.95th=[ 5669], 00:13:57.734 | 99.99th=[ 6063] 00:13:57.734 bw ( KiB/s): min=130960, max=137112, per=99.70%, avg=133360.89, stdev=1962.14, samples=9 00:13:57.734 iops : min=32740, max=34278, avg=33340.22, stdev=490.53, samples=9 00:13:57.734 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:57.734 lat (msec) : 2=83.78%, 4=16.13%, 10=0.06% 00:13:57.734 cpu : usr=62.85%, sys=33.65%, ctx=12, majf=0, minf=762 00:13:57.734 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:57.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.734 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:57.734 issued rwts: total=0,167273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.734 00:13:57.734 Run status group 0 (all jobs): 00:13:57.734 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=653MiB (685MB), run=5002-5002msec 00:13:57.995 ----------------------------------------------------- 00:13:57.995 Suppressions used: 00:13:57.995 count bytes template 00:13:57.995 1 11 /usr/src/fio/parse.c 00:13:57.995 1 8 libtcmalloc_minimal.so 00:13:57.995 1 904 libcrypto.so 00:13:57.995 ----------------------------------------------------- 00:13:57.995 00:13:57.995 ************************************ 00:13:57.995 END TEST xnvme_fio_plugin 00:13:57.995 
************************************ 00:13:57.995 00:13:57.995 real 0m13.922s 00:13:57.995 user 0m9.434s 00:13:57.995 sys 0m3.797s 00:13:57.995 13:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.995 13:26:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:57.995 13:26:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:57.995 13:26:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.995 13:26:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.995 13:26:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.995 ************************************ 00:13:57.995 START TEST xnvme_rpc 00:13:57.995 ************************************ 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70755 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70755 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70755 ']' 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.995 13:26:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:58.256 [2024-11-26 13:26:46.561742] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
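[Annotation] The second xnvme_rpc pass, starting here, switches io_mechanism to io_uring_cmd, which goes through the NVMe character device /dev/ng0n1 instead of the block device, and leaves conserve_cpu at false (the cc lookup expands to an empty string, hence the trailing '' in the create call below). The equivalent direct calls, as a sketch:

spdk=/home/vagrant/spdk_repo/spdk
$spdk/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
$spdk/scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # prints: io_uring_cmd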
00:13:58.256 [2024-11-26 13:26:46.561883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:13:58.256 [2024-11-26 13:26:46.726577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.517 [2024-11-26 13:26:46.845901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.090 xnvme_bdev 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.090 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70755 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70755 ']' 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70755 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70755 00:13:59.352 killing process with pid 70755 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70755' 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70755 00:13:59.352 13:26:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70755 00:14:01.271 00:14:01.271 real 0m2.879s 00:14:01.271 user 0m2.866s 00:14:01.271 sys 0m0.485s 00:14:01.271 13:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.271 ************************************ 00:14:01.271 END TEST xnvme_rpc 00:14:01.271 ************************************ 00:14:01.271 13:26:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.271 13:26:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:01.272 13:26:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.272 13:26:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.272 13:26:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.272 ************************************ 00:14:01.272 START TEST xnvme_bdevperf 00:14:01.272 ************************************ 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:01.272 13:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.272 { 00:14:01.272 "subsystems": [ 00:14:01.272 { 00:14:01.272 "subsystem": "bdev", 00:14:01.272 "config": [ 00:14:01.272 { 00:14:01.272 "params": { 00:14:01.272 "io_mechanism": "io_uring_cmd", 00:14:01.272 "conserve_cpu": false, 00:14:01.272 "filename": "/dev/ng0n1", 00:14:01.272 "name": "xnvme_bdev" 00:14:01.272 }, 00:14:01.272 "method": "bdev_xnvme_create" 00:14:01.272 }, 00:14:01.272 { 00:14:01.272 "method": "bdev_wait_for_examine" 00:14:01.272 } 00:14:01.272 ] 00:14:01.272 } 00:14:01.272 ] 00:14:01.272 } 00:14:01.272 [2024-11-26 13:26:49.494147] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:01.272 [2024-11-26 13:26:49.494287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:14:01.272 [2024-11-26 13:26:49.657531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.272 [2024-11-26 13:26:49.777558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.534 Running I/O for 5 seconds... 00:14:03.867 34367.00 IOPS, 134.25 MiB/s [2024-11-26T13:26:53.383Z] 34815.50 IOPS, 136.00 MiB/s [2024-11-26T13:26:54.329Z] 34410.33 IOPS, 134.42 MiB/s [2024-11-26T13:26:55.273Z] 34271.50 IOPS, 133.87 MiB/s [2024-11-26T13:26:55.273Z] 34171.80 IOPS, 133.48 MiB/s 00:14:06.703 Latency(us) 00:14:06.703 [2024-11-26T13:26:55.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.703 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.703 xnvme_bdev : 5.01 34149.22 133.40 0.00 0.00 1870.66 340.28 7763.50 00:14:06.703 [2024-11-26T13:26:55.273Z] =================================================================================================================== 00:14:06.703 [2024-11-26T13:26:55.273Z] Total : 34149.22 133.40 0.00 0.00 1870.66 340.28 7763.50 00:14:07.276 13:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.276 13:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:07.276 13:26:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.276 13:26:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.276 13:26:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.537 { 00:14:07.537 "subsystems": [ 00:14:07.537 { 00:14:07.537 "subsystem": "bdev", 00:14:07.537 "config": [ 00:14:07.537 { 00:14:07.537 "params": { 00:14:07.537 "io_mechanism": "io_uring_cmd", 00:14:07.537 "conserve_cpu": false, 00:14:07.538 "filename": "/dev/ng0n1", 00:14:07.538 "name": "xnvme_bdev" 00:14:07.538 }, 00:14:07.538 "method": "bdev_xnvme_create" 00:14:07.538 }, 00:14:07.538 { 00:14:07.538 "method": "bdev_wait_for_examine" 00:14:07.538 } 00:14:07.538 ] 00:14:07.538 } 00:14:07.538 ] 00:14:07.538 } 00:14:07.538 [2024-11-26 13:26:55.917384] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
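[Annotation] This bdevperf section iterates the io_pattern loop visible in the trace over the io_uring_cmd workload list: randread completed above, randwrite starts here, and unmap and write_zeroes follow. Reduced to a sketch (the four-entry list is inferred from the runs recorded in this log; config generation is elided):

conf=xnvme_bdev.json   # the JSON block shown above, saved to a file (illustrative name)
for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json "$conf" -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done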
00:14:07.538 [2024-11-26 13:26:55.917778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70898 ] 00:14:07.538 [2024-11-26 13:26:56.083468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.798 [2024-11-26 13:26:56.202628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.060 Running I/O for 5 seconds... 00:14:09.950 38445.00 IOPS, 150.18 MiB/s [2024-11-26T13:26:59.906Z] 38222.00 IOPS, 149.30 MiB/s [2024-11-26T13:27:00.849Z] 38430.67 IOPS, 150.12 MiB/s [2024-11-26T13:27:01.793Z] 37383.75 IOPS, 146.03 MiB/s 00:14:13.223 Latency(us) 00:14:13.223 [2024-11-26T13:27:01.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.223 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:13.223 xnvme_bdev : 5.00 36778.66 143.67 0.00 0.00 1736.73 341.86 6503.19 00:14:13.223 [2024-11-26T13:27:01.793Z] =================================================================================================================== 00:14:13.223 [2024-11-26T13:27:01.793Z] Total : 36778.66 143.67 0.00 0.00 1736.73 341.86 6503.19 00:14:13.795 13:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.796 13:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:13.796 13:27:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.796 13:27:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.796 13:27:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.796 { 00:14:13.796 "subsystems": [ 00:14:13.796 { 00:14:13.796 "subsystem": "bdev", 00:14:13.796 "config": [ 00:14:13.796 { 00:14:13.796 "params": { 00:14:13.796 "io_mechanism": "io_uring_cmd", 00:14:13.796 "conserve_cpu": false, 00:14:13.796 "filename": "/dev/ng0n1", 00:14:13.796 "name": "xnvme_bdev" 00:14:13.796 }, 00:14:13.796 "method": "bdev_xnvme_create" 00:14:13.796 }, 00:14:13.796 { 00:14:13.796 "method": "bdev_wait_for_examine" 00:14:13.796 } 00:14:13.796 ] 00:14:13.796 } 00:14:13.796 ] 00:14:13.796 } 00:14:13.796 [2024-11-26 13:27:02.337782] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:13.796 [2024-11-26 13:27:02.337930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70972 ] 00:14:14.056 [2024-11-26 13:27:02.503583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.317 [2024-11-26 13:27:02.623880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.578 Running I/O for 5 seconds... 
00:14:16.464 78464.00 IOPS, 306.50 MiB/s [2024-11-26T13:27:05.979Z] 78880.00 IOPS, 308.12 MiB/s [2024-11-26T13:27:06.922Z] 82709.33 IOPS, 323.08 MiB/s [2024-11-26T13:27:08.308Z] 86384.00 IOPS, 337.44 MiB/s [2024-11-26T13:27:08.308Z] 88537.60 IOPS, 345.85 MiB/s 00:14:19.738 Latency(us) 00:14:19.738 [2024-11-26T13:27:08.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.738 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:19.738 xnvme_bdev : 5.00 88501.83 345.71 0.00 0.00 719.78 463.16 2457.60 00:14:19.738 [2024-11-26T13:27:08.308Z] =================================================================================================================== 00:14:19.738 [2024-11-26T13:27:08.308Z] Total : 88501.83 345.71 0.00 0.00 719.78 463.16 2457.60 00:14:19.999 13:27:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.999 13:27:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:19.999 13:27:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.999 13:27:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.999 13:27:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.999 { 00:14:19.999 "subsystems": [ 00:14:19.999 { 00:14:19.999 "subsystem": "bdev", 00:14:19.999 "config": [ 00:14:19.999 { 00:14:19.999 "params": { 00:14:19.999 "io_mechanism": "io_uring_cmd", 00:14:19.999 "conserve_cpu": false, 00:14:20.000 "filename": "/dev/ng0n1", 00:14:20.000 "name": "xnvme_bdev" 00:14:20.000 }, 00:14:20.000 "method": "bdev_xnvme_create" 00:14:20.000 }, 00:14:20.000 { 00:14:20.000 "method": "bdev_wait_for_examine" 00:14:20.000 } 00:14:20.000 ] 00:14:20.000 } 00:14:20.000 ] 00:14:20.000 } 00:14:20.000 [2024-11-26 13:27:08.521583] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:20.000 [2024-11-26 13:27:08.521690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:14:20.260 [2024-11-26 13:27:08.678791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.260 [2024-11-26 13:27:08.758204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.522 Running I/O for 5 seconds... 
00:14:22.410 59384.00 IOPS, 231.97 MiB/s [2024-11-26T13:27:12.367Z] 53399.50 IOPS, 208.59 MiB/s [2024-11-26T13:27:13.305Z] 48820.33 IOPS, 190.70 MiB/s [2024-11-26T13:27:14.247Z] 47044.75 IOPS, 183.77 MiB/s [2024-11-26T13:27:14.247Z] 45644.80 IOPS, 178.30 MiB/s 00:14:25.677 Latency(us) 00:14:25.677 [2024-11-26T13:27:14.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.677 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:25.677 xnvme_bdev : 5.00 45635.36 178.26 0.00 0.00 1398.77 114.22 32465.53 00:14:25.677 [2024-11-26T13:27:14.247Z] =================================================================================================================== 00:14:25.677 [2024-11-26T13:27:14.247Z] Total : 45635.36 178.26 0.00 0.00 1398.77 114.22 32465.53 00:14:26.265 00:14:26.265 real 0m25.380s 00:14:26.265 user 0m14.051s 00:14:26.265 sys 0m10.835s 00:14:26.265 13:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.265 ************************************ 00:14:26.266 END TEST xnvme_bdevperf 00:14:26.266 ************************************ 00:14:26.266 13:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.528 13:27:14 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:26.528 13:27:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.528 13:27:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.528 13:27:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.528 ************************************ 00:14:26.528 START TEST xnvme_fio_plugin 00:14:26.528 ************************************ 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@32 -- # gen_conf 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:26.528 13:27:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.528 { 00:14:26.528 "subsystems": [ 00:14:26.528 { 00:14:26.528 "subsystem": "bdev", 00:14:26.528 "config": [ 00:14:26.528 { 00:14:26.528 "params": { 00:14:26.528 "io_mechanism": "io_uring_cmd", 00:14:26.528 "conserve_cpu": false, 00:14:26.528 "filename": "/dev/ng0n1", 00:14:26.528 "name": "xnvme_bdev" 00:14:26.528 }, 00:14:26.528 "method": "bdev_xnvme_create" 00:14:26.528 }, 00:14:26.528 { 00:14:26.528 "method": "bdev_wait_for_examine" 00:14:26.528 } 00:14:26.528 ] 00:14:26.528 } 00:14:26.528 ] 00:14:26.528 } 00:14:26.528 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.528 fio-3.35 00:14:26.528 Starting 1 thread 00:14:33.128 00:14:33.128 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71159: Tue Nov 26 13:27:20 2024 00:14:33.128 read: IOPS=35.2k, BW=138MiB/s (144MB/s)(688MiB/5002msec) 00:14:33.128 slat (nsec): min=2722, max=89291, avg=3544.35, stdev=2057.93 00:14:33.128 clat (usec): min=925, max=3488, avg=1672.14, stdev=314.55 00:14:33.128 lat (usec): min=927, max=3518, avg=1675.69, stdev=314.92 00:14:33.128 clat percentiles (usec): 00:14:33.128 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1401], 00:14:33.128 | 30.00th=[ 1483], 40.00th=[ 1565], 50.00th=[ 1631], 60.00th=[ 1713], 00:14:33.128 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2089], 95.00th=[ 2245], 00:14:33.128 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2966], 99.95th=[ 3097], 00:14:33.128 | 99.99th=[ 3359] 00:14:33.128 bw ( KiB/s): min=133632, max=157184, per=99.72%, avg=140458.67, stdev=7827.91, samples=9 00:14:33.128 iops : min=33408, max=39296, avg=35114.67, stdev=1956.98, samples=9 00:14:33.128 lat (usec) : 1000=0.07% 00:14:33.128 lat (msec) : 2=85.22%, 4=14.72% 00:14:33.128 cpu : usr=35.13%, sys=63.57%, ctx=8, majf=0, minf=762 00:14:33.128 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:33.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.128 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:33.128 issued rwts: total=176128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.128 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:33.128 00:14:33.128 Run status group 0 (all jobs): 00:14:33.128 READ: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=688MiB (721MB), run=5002-5002msec 00:14:33.390 ----------------------------------------------------- 00:14:33.390 Suppressions used: 00:14:33.390 count bytes template 00:14:33.390 1 11 /usr/src/fio/parse.c 00:14:33.390 1 8 libtcmalloc_minimal.so 00:14:33.390 1 904 libcrypto.so 00:14:33.390 ----------------------------------------------------- 00:14:33.390 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.390 13:27:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.390 { 00:14:33.390 "subsystems": [ 00:14:33.390 { 00:14:33.390 "subsystem": "bdev", 00:14:33.390 "config": [ 00:14:33.390 { 00:14:33.390 "params": { 00:14:33.390 "io_mechanism": "io_uring_cmd", 00:14:33.390 "conserve_cpu": false, 00:14:33.390 "filename": "/dev/ng0n1", 00:14:33.390 "name": "xnvme_bdev" 00:14:33.390 }, 00:14:33.390 "method": "bdev_xnvme_create" 00:14:33.390 }, 00:14:33.390 { 00:14:33.390 "method": "bdev_wait_for_examine" 00:14:33.390 } 00:14:33.390 ] 00:14:33.390 } 00:14:33.390 ] 00:14:33.390 } 00:14:33.651 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:33.651 fio-3.35 00:14:33.651 Starting 1 thread 00:14:40.242 00:14:40.242 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71250: Tue Nov 26 13:27:27 2024 00:14:40.242 write: IOPS=39.6k, BW=155MiB/s (162MB/s)(775MiB/5002msec); 0 zone resets 00:14:40.242 slat (nsec): min=2788, max=86905, avg=3690.37, stdev=1893.76 00:14:40.242 clat (usec): min=134, max=11001, avg=1472.50, stdev=344.73 00:14:40.242 lat (usec): min=138, max=11006, avg=1476.19, stdev=345.05 00:14:40.242 clat percentiles (usec): 00:14:40.242 | 1.00th=[ 799], 5.00th=[ 1037], 10.00th=[ 1123], 20.00th=[ 1205], 00:14:40.242 | 30.00th=[ 1287], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1532], 00:14:40.242 | 70.00th=[ 1614], 80.00th=[ 1713], 90.00th=[ 1860], 95.00th=[ 1975], 00:14:40.242 | 99.00th=[ 2278], 99.50th=[ 2507], 99.90th=[ 3654], 99.95th=[ 5080], 00:14:40.242 | 99.99th=[10028] 00:14:40.242 bw ( KiB/s): min=142432, max=178160, per=100.00%, avg=160567.56, stdev=16233.92, samples=9 00:14:40.242 iops : min=35608, max=44542, avg=40141.89, stdev=4058.49, samples=9 00:14:40.242 lat (usec) : 250=0.01%, 500=0.19%, 750=0.51%, 1000=3.07% 00:14:40.242 lat (msec) : 2=91.86%, 4=4.28%, 10=0.08%, 20=0.01% 00:14:40.242 cpu : usr=38.45%, sys=60.23%, ctx=13, majf=0, minf=762 00:14:40.242 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.4%, 16=23.0%, 32=54.0%, >=64=1.8% 00:14:40.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.242 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.4%, >=64=0.0% 00:14:40.242 issued rwts: total=0,198278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.242 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.242 00:14:40.242 Run status group 0 (all jobs): 00:14:40.242 WRITE: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=775MiB (812MB), run=5002-5002msec 00:14:40.242 ----------------------------------------------------- 00:14:40.242 Suppressions used: 00:14:40.242 count bytes template 00:14:40.242 1 11 /usr/src/fio/parse.c 00:14:40.242 1 8 libtcmalloc_minimal.so 00:14:40.242 1 904 libcrypto.so 00:14:40.242 ----------------------------------------------------- 00:14:40.242 00:14:40.242 ************************************ 00:14:40.242 END TEST xnvme_fio_plugin 00:14:40.242 ************************************ 00:14:40.242 00:14:40.242 real 0m13.916s 00:14:40.242 user 0m6.624s 00:14:40.242 sys 0m6.839s 00:14:40.242 13:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.242 13:27:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:40.504 13:27:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:40.504 13:27:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:40.504 13:27:28 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:40.504 13:27:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:40.504 13:27:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:40.504 13:27:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.504 13:27:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.504 ************************************ 00:14:40.504 START TEST xnvme_rpc 00:14:40.504 ************************************ 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71335 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71335 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.504 13:27:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.504 [2024-11-26 13:27:28.960734] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
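The xnvme_rpc test starting here reduces to a create/inspect/delete round-trip against the freshly launched spdk_tgt. A minimal sketch of that exchange, assuming the stock scripts/rpc.py client from the SPDK checkout (the harness's rpc_cmd is a thin wrapper around it):

# Create an xnvme bdev over the NVMe char-namespace device; io_uring_cmd is
# the passthrough mechanism and -c turns on conserve_cpu, the variant under
# test in this pass.
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Read the runtime bdev config back and pick out one parameter -- the same
# jq filter the test applies below for name, filename, and io_mechanism.
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

# Tear the bdev down again before the target is killed.
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev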
00:14:40.504 [2024-11-26 13:27:28.961137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:14:40.764 [2024-11-26 13:27:29.128904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.764 [2024-11-26 13:27:29.271472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 xnvme_bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:41.707 
13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71335 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71335 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:14:41.707 killing process with pid 71335 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71335 00:14:41.707 13:27:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71335 00:14:43.621 00:14:43.621 real 0m3.039s 00:14:43.621 user 0m2.959s 00:14:43.621 sys 0m0.552s 00:14:43.621 ************************************ 00:14:43.621 END TEST xnvme_rpc 00:14:43.621 ************************************ 00:14:43.621 13:27:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.621 13:27:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.621 13:27:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:43.621 13:27:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.621 13:27:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.621 13:27:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.621 ************************************ 00:14:43.621 START TEST xnvme_bdevperf 00:14:43.621 ************************************ 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.621 13:27:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.621 { 00:14:43.621 "subsystems": [ 00:14:43.621 { 00:14:43.621 "subsystem": "bdev", 00:14:43.621 "config": [ 00:14:43.621 { 00:14:43.621 "params": { 00:14:43.621 "io_mechanism": "io_uring_cmd", 00:14:43.621 "conserve_cpu": true, 00:14:43.621 "filename": "/dev/ng0n1", 00:14:43.621 "name": "xnvme_bdev" 00:14:43.621 }, 00:14:43.621 "method": "bdev_xnvme_create" 00:14:43.621 }, 00:14:43.621 { 00:14:43.621 "method": "bdev_wait_for_examine" 00:14:43.621 } 00:14:43.621 ] 00:14:43.621 } 00:14:43.621 ] 00:14:43.621 } 00:14:43.621 [2024-11-26 13:27:32.024384] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:43.621 [2024-11-26 13:27:32.024531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71409 ] 00:14:43.621 [2024-11-26 13:27:32.181420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.879 [2024-11-26 13:27:32.279201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.138 Running I/O for 5 seconds... 00:14:46.020 43303.00 IOPS, 169.15 MiB/s [2024-11-26T13:27:35.529Z] 42054.50 IOPS, 164.28 MiB/s [2024-11-26T13:27:36.911Z] 41156.33 IOPS, 160.77 MiB/s [2024-11-26T13:27:37.855Z] 40803.25 IOPS, 159.39 MiB/s [2024-11-26T13:27:37.855Z] 39695.40 IOPS, 155.06 MiB/s 00:14:49.285 Latency(us) 00:14:49.285 [2024-11-26T13:27:37.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.285 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:49.285 xnvme_bdev : 5.01 39639.91 154.84 0.00 0.00 1610.70 598.65 5797.42 00:14:49.285 [2024-11-26T13:27:37.855Z] =================================================================================================================== 00:14:49.285 [2024-11-26T13:27:37.855Z] Total : 39639.91 154.84 0.00 0.00 1610.70 598.65 5797.42 00:14:49.854 13:27:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.855 13:27:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:49.855 13:27:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.855 13:27:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.855 13:27:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.855 { 00:14:49.855 "subsystems": [ 00:14:49.855 { 00:14:49.855 "subsystem": "bdev", 00:14:49.855 "config": [ 00:14:49.855 { 00:14:49.855 "params": { 00:14:49.855 "io_mechanism": "io_uring_cmd", 00:14:49.855 "conserve_cpu": true, 00:14:49.855 "filename": "/dev/ng0n1", 00:14:49.855 "name": "xnvme_bdev" 00:14:49.855 }, 00:14:49.855 "method": "bdev_xnvme_create" 00:14:49.855 }, 00:14:49.855 { 00:14:49.855 "method": "bdev_wait_for_examine" 00:14:49.855 } 00:14:49.855 ] 00:14:49.855 } 00:14:49.855 ] 00:14:49.855 } 00:14:49.855 [2024-11-26 13:27:38.343790] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
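Every bdevperf pass in this test is driven the same way: gen_conf prints the bdev JSON and hands it to bdevperf through a process-substitution descriptor, which is why the command line shows --json /dev/fd/62. A hedged, self-contained equivalent of the randread invocation above, with the config inlined as a heredoc (paths as printed in this log):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev \
    --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)

The same JSON, with only -w changed, drives the randwrite, unmap, and write_zeroes passes that follow.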
00:14:49.855 [2024-11-26 13:27:38.343921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:14:50.114 [2024-11-26 13:27:38.508716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.114 [2024-11-26 13:27:38.632937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.375 Running I/O for 5 seconds... 00:14:52.699 38361.00 IOPS, 149.85 MiB/s [2024-11-26T13:27:42.214Z] 40759.00 IOPS, 159.21 MiB/s [2024-11-26T13:27:43.158Z] 41771.00 IOPS, 163.17 MiB/s [2024-11-26T13:27:44.102Z] 42197.00 IOPS, 164.83 MiB/s [2024-11-26T13:27:44.102Z] 42680.40 IOPS, 166.72 MiB/s 00:14:55.532 Latency(us) 00:14:55.532 [2024-11-26T13:27:44.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.532 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:55.532 xnvme_bdev : 5.01 42640.14 166.56 0.00 0.00 1496.89 293.02 5999.06 00:14:55.532 [2024-11-26T13:27:44.102Z] =================================================================================================================== 00:14:55.532 [2024-11-26T13:27:44.102Z] Total : 42640.14 166.56 0.00 0.00 1496.89 293.02 5999.06 00:14:56.476 13:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.476 13:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:56.476 13:27:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.476 13:27:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.476 13:27:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.476 { 00:14:56.476 "subsystems": [ 00:14:56.476 { 00:14:56.476 "subsystem": "bdev", 00:14:56.476 "config": [ 00:14:56.476 { 00:14:56.476 "params": { 00:14:56.476 "io_mechanism": "io_uring_cmd", 00:14:56.476 "conserve_cpu": true, 00:14:56.476 "filename": "/dev/ng0n1", 00:14:56.476 "name": "xnvme_bdev" 00:14:56.476 }, 00:14:56.476 "method": "bdev_xnvme_create" 00:14:56.476 }, 00:14:56.476 { 00:14:56.476 "method": "bdev_wait_for_examine" 00:14:56.476 } 00:14:56.476 ] 00:14:56.476 } 00:14:56.476 ] 00:14:56.476 } 00:14:56.476 [2024-11-26 13:27:44.781369] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:14:56.476 [2024-11-26 13:27:44.781551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71554 ] 00:14:56.476 [2024-11-26 13:27:44.946199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.737 [2024-11-26 13:27:45.066226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.998 Running I/O for 5 seconds... 
00:14:58.886 77248.00 IOPS, 301.75 MiB/s [2024-11-26T13:27:48.397Z] 75360.00 IOPS, 294.38 MiB/s [2024-11-26T13:27:49.442Z] 74410.67 IOPS, 290.67 MiB/s [2024-11-26T13:27:50.444Z] 75904.00 IOPS, 296.50 MiB/s [2024-11-26T13:27:50.444Z] 77683.20 IOPS, 303.45 MiB/s 00:15:01.874 Latency(us) 00:15:01.874 [2024-11-26T13:27:50.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.874 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:01.874 xnvme_bdev : 5.00 77659.90 303.36 0.00 0.00 820.56 400.15 2923.91 00:15:01.874 [2024-11-26T13:27:50.444Z] =================================================================================================================== 00:15:01.874 [2024-11-26T13:27:50.444Z] Total : 77659.90 303.36 0.00 0.00 820.56 400.15 2923.91 00:15:02.875 13:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.875 13:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:02.875 13:27:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.875 13:27:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.875 13:27:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.875 { 00:15:02.875 "subsystems": [ 00:15:02.875 { 00:15:02.875 "subsystem": "bdev", 00:15:02.875 "config": [ 00:15:02.875 { 00:15:02.875 "params": { 00:15:02.875 "io_mechanism": "io_uring_cmd", 00:15:02.875 "conserve_cpu": true, 00:15:02.875 "filename": "/dev/ng0n1", 00:15:02.875 "name": "xnvme_bdev" 00:15:02.875 }, 00:15:02.875 "method": "bdev_xnvme_create" 00:15:02.875 }, 00:15:02.875 { 00:15:02.875 "method": "bdev_wait_for_examine" 00:15:02.875 } 00:15:02.875 ] 00:15:02.875 } 00:15:02.875 ] 00:15:02.875 } 00:15:02.875 [2024-11-26 13:27:51.211841] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:15:02.875 [2024-11-26 13:27:51.212002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71634 ] 00:15:02.875 [2024-11-26 13:27:51.378059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.136 [2024-11-26 13:27:51.492142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.397 Running I/O for 5 seconds... 
00:15:05.283 39338.00 IOPS, 153.66 MiB/s [2024-11-26T13:27:54.794Z] 39083.00 IOPS, 152.67 MiB/s [2024-11-26T13:27:56.178Z] 38618.67 IOPS, 150.85 MiB/s [2024-11-26T13:27:57.118Z] 38550.00 IOPS, 150.59 MiB/s [2024-11-26T13:27:57.118Z] 38267.40 IOPS, 149.48 MiB/s 00:15:08.548 Latency(us) 00:15:08.548 [2024-11-26T13:27:57.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.548 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:08.548 xnvme_bdev : 5.00 38248.09 149.41 0.00 0.00 1667.97 126.03 23895.43 00:15:08.548 [2024-11-26T13:27:57.118Z] =================================================================================================================== 00:15:08.548 [2024-11-26T13:27:57.118Z] Total : 38248.09 149.41 0.00 0.00 1667.97 126.03 23895.43 00:15:09.118 00:15:09.118 real 0m25.599s 00:15:09.118 user 0m17.492s 00:15:09.118 sys 0m5.850s 00:15:09.118 13:27:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.118 13:27:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.118 ************************************ 00:15:09.118 END TEST xnvme_bdevperf 00:15:09.118 ************************************ 00:15:09.118 13:27:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:09.118 13:27:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.118 13:27:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.118 13:27:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.118 ************************************ 00:15:09.118 START TEST xnvme_fio_plugin 00:15:09.118 ************************************ 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.118 { 00:15:09.118 "subsystems": [ 00:15:09.118 { 00:15:09.118 "subsystem": "bdev", 00:15:09.118 "config": [ 00:15:09.118 { 00:15:09.118 "params": { 00:15:09.118 "io_mechanism": "io_uring_cmd", 00:15:09.118 "conserve_cpu": true, 00:15:09.118 "filename": "/dev/ng0n1", 00:15:09.118 "name": "xnvme_bdev" 00:15:09.118 }, 00:15:09.118 "method": "bdev_xnvme_create" 00:15:09.118 }, 00:15:09.118 { 00:15:09.118 "method": "bdev_wait_for_examine" 00:15:09.118 } 00:15:09.118 ] 00:15:09.118 } 00:15:09.118 ] 00:15:09.118 } 00:15:09.118 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:09.119 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:09.119 13:27:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.378 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:09.378 fio-3.35 00:15:09.378 Starting 1 thread 00:15:15.964 00:15:15.964 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71757: Tue Nov 26 13:28:03 2024 00:15:15.964 read: IOPS=39.7k, BW=155MiB/s (163MB/s)(776MiB/5001msec) 00:15:15.964 slat (usec): min=2, max=229, avg= 3.33, stdev= 2.03 00:15:15.964 clat (usec): min=656, max=3765, avg=1478.24, stdev=288.54 00:15:15.964 lat (usec): min=658, max=3800, avg=1481.57, stdev=288.96 00:15:15.964 clat percentiles (usec): 00:15:15.964 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1156], 20.00th=[ 1237], 00:15:15.964 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1434], 60.00th=[ 1500], 00:15:15.964 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1876], 95.00th=[ 2024], 00:15:15.964 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2835], 99.95th=[ 3097], 00:15:15.964 | 99.99th=[ 3556] 00:15:15.964 bw ( KiB/s): min=142080, max=174336, per=98.97%, avg=157152.22, stdev=10959.61, samples=9 00:15:15.964 iops : min=35520, max=43584, avg=39288.00, stdev=2739.99, samples=9 00:15:15.964 lat (usec) : 750=0.01%, 1000=0.64% 00:15:15.964 lat (msec) : 2=93.67%, 4=5.68% 00:15:15.964 cpu : usr=67.66%, sys=29.08%, ctx=23, majf=0, minf=762 00:15:15.964 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:15.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.964 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:15:15.964 issued rwts: total=198528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.964 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.964 00:15:15.964 Run status group 0 (all jobs): 00:15:15.964 READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=776MiB (813MB), run=5001-5001msec 00:15:16.226 ----------------------------------------------------- 00:15:16.226 Suppressions used: 00:15:16.226 count bytes template 00:15:16.226 1 11 /usr/src/fio/parse.c 00:15:16.226 1 8 libtcmalloc_minimal.so 00:15:16.226 1 904 libcrypto.so 00:15:16.226 ----------------------------------------------------- 00:15:16.226 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:16.226 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:16.227 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:16.227 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:16.227 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:16.227 { 00:15:16.227 "subsystems": [ 00:15:16.227 { 00:15:16.227 "subsystem": "bdev", 00:15:16.227 "config": [ 00:15:16.227 { 00:15:16.227 "params": { 00:15:16.227 "io_mechanism": "io_uring_cmd", 00:15:16.227 "conserve_cpu": true, 00:15:16.227 "filename": "/dev/ng0n1", 00:15:16.227 "name": "xnvme_bdev" 00:15:16.227 }, 00:15:16.227 "method": "bdev_xnvme_create" 00:15:16.227 }, 
00:15:16.227 { 00:15:16.227 "method": "bdev_wait_for_examine" 00:15:16.227 } 00:15:16.227 ] 00:15:16.227 } 00:15:16.227 ] 00:15:16.227 } 00:15:16.227 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:16.227 13:28:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.227 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:16.227 fio-3.35 00:15:16.227 Starting 1 thread 00:15:22.815 00:15:22.815 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71842: Tue Nov 26 13:28:10 2024 00:15:22.815 write: IOPS=40.3k, BW=158MiB/s (165MB/s)(788MiB/5002msec); 0 zone resets 00:15:22.815 slat (usec): min=2, max=260, avg= 3.82, stdev= 2.22 00:15:22.815 clat (usec): min=369, max=8396, avg=1434.01, stdev=275.09 00:15:22.815 lat (usec): min=372, max=8399, avg=1437.83, stdev=275.62 00:15:22.815 clat percentiles (usec): 00:15:22.815 | 1.00th=[ 1004], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1205], 00:15:22.815 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1401], 60.00th=[ 1467], 00:15:22.815 | 70.00th=[ 1549], 80.00th=[ 1647], 90.00th=[ 1778], 95.00th=[ 1909], 00:15:22.815 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 3064], 99.95th=[ 3556], 00:15:22.815 | 99.99th=[ 5014] 00:15:22.815 bw ( KiB/s): min=149392, max=179184, per=99.25%, avg=160187.56, stdev=12293.98, samples=9 00:15:22.815 iops : min=37348, max=44796, avg=40046.89, stdev=3073.50, samples=9 00:15:22.815 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.82% 00:15:22.815 lat (msec) : 2=96.17%, 4=2.97%, 10=0.04% 00:15:22.815 cpu : usr=57.75%, sys=37.11%, ctx=18, majf=0, minf=762 00:15:22.815 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:15:22.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.815 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:22.815 issued rwts: total=0,201831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.815 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:22.815 00:15:22.815 Run status group 0 (all jobs): 00:15:22.816 WRITE: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=788MiB (827MB), run=5002-5002msec 00:15:23.076 ----------------------------------------------------- 00:15:23.076 Suppressions used: 00:15:23.076 count bytes template 00:15:23.076 1 11 /usr/src/fio/parse.c 00:15:23.076 1 8 libtcmalloc_minimal.so 00:15:23.076 1 904 libcrypto.so 00:15:23.076 ----------------------------------------------------- 00:15:23.076 00:15:23.076 00:15:23.076 real 0m13.909s 00:15:23.076 user 0m9.170s 00:15:23.076 sys 0m3.978s 00:15:23.076 13:28:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.076 ************************************ 00:15:23.076 END TEST xnvme_fio_plugin 00:15:23.076 ************************************ 00:15:23.076 13:28:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:23.076 13:28:11 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71335 00:15:23.076 13:28:11 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:15:23.076 13:28:11 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71335 00:15:23.076 
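The fio_plugin wrapper traced in this test (ldd on the plugin, grep libasan, awk '{print $3}', then break) exists because an ASan-instrumented plugin can only be dlopen'ed by an uninstrumented fio if libasan is already in the process, so the wrapper resolves the sanitizer runtime from the plugin's dependencies and preloads it first. A condensed sketch of that dance, with paths as they appear above; xnvme_bdev.json stands in for the JSON the harness feeds over /dev/fd/62 (same content as in the bdevperf sketch earlier):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Pull the resolved libasan path out of the plugin's dynamic dependencies.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# Preload the sanitizer runtime ahead of the plugin so fio can dlopen it.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev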
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71335) - No such process 00:15:23.076 Process with pid 71335 is not found 00:15:23.076 13:28:11 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71335 is not found' 00:15:23.076 ************************************ 00:15:23.076 END TEST nvme_xnvme 00:15:23.076 ************************************ 00:15:23.076 00:15:23.076 real 3m31.735s 00:15:23.076 user 2m1.175s 00:15:23.076 sys 1m16.030s 00:15:23.076 13:28:11 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.076 13:28:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.338 13:28:11 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:23.338 13:28:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:23.338 13:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.338 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:15:23.338 ************************************ 00:15:23.338 START TEST blockdev_xnvme 00:15:23.338 ************************************ 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:23.338 * Looking for test storage... 00:15:23.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.338 13:28:11 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:23.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.338 --rc genhtml_branch_coverage=1 00:15:23.338 --rc genhtml_function_coverage=1 00:15:23.338 --rc genhtml_legend=1 00:15:23.338 --rc geninfo_all_blocks=1 00:15:23.338 --rc geninfo_unexecuted_blocks=1 00:15:23.338 00:15:23.338 ' 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:23.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.338 --rc genhtml_branch_coverage=1 00:15:23.338 --rc genhtml_function_coverage=1 00:15:23.338 --rc genhtml_legend=1 00:15:23.338 --rc geninfo_all_blocks=1 00:15:23.338 --rc geninfo_unexecuted_blocks=1 00:15:23.338 00:15:23.338 ' 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:23.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.338 --rc genhtml_branch_coverage=1 00:15:23.338 --rc genhtml_function_coverage=1 00:15:23.338 --rc genhtml_legend=1 00:15:23.338 --rc geninfo_all_blocks=1 00:15:23.338 --rc geninfo_unexecuted_blocks=1 00:15:23.338 00:15:23.338 ' 00:15:23.338 13:28:11 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:23.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.338 --rc genhtml_branch_coverage=1 00:15:23.338 --rc genhtml_function_coverage=1 00:15:23.338 --rc genhtml_legend=1 00:15:23.338 --rc geninfo_all_blocks=1 00:15:23.338 --rc geninfo_unexecuted_blocks=1 00:15:23.338 00:15:23.338 ' 00:15:23.338 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:23.338 13:28:11 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:23.338 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:23.338 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.338 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71982 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:23.339 13:28:11 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71982 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71982 ']' 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.339 13:28:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.600 [2024-11-26 13:28:11.912045] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
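The lt/decimal/cmp_versions trace a little further up is scripts/common.sh deciding which lcov flag set to export: both dotted versions are split into arrays and compared field by numeric field. A hedged standalone rendering of that comparison (ver_lt is an illustrative name, not the repo's):

ver_lt() {
    # Return 0 when $1 sorts strictly before $2, comparing dot-separated
    # fields numerically and padding the shorter version with zeros.
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1
}

ver_lt "1.15" "2" && echo "lcov is older than 2.x"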
00:15:23.600 [2024-11-26 13:28:11.912463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71982 ] 00:15:23.600 [2024-11-26 13:28:12.078035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.860 [2024-11-26 13:28:12.219007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.800 13:28:13 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.800 13:28:13 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:24.800 13:28:13 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:24.800 13:28:13 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:24.800 13:28:13 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:24.800 13:28:13 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:24.800 13:28:13 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:25.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.632 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:25.632 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:25.632 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:25.632 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:25.632 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:25.632 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.633 nvme0n1 00:15:25.633 nvme0n2 00:15:25.633 nvme0n3 00:15:25.633 nvme1n1 00:15:25.633 nvme2n1 00:15:25.633 nvme3n1 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:25.633 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.633 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:25.895 13:28:14 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.895 13:28:14 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:25.895 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:25.896 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "658645d7-68fa-41bc-ae93-192d0fb33155"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "658645d7-68fa-41bc-ae93-192d0fb33155",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "022f3660-ca86-47f2-8071-86feee243bd6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "022f3660-ca86-47f2-8071-86feee243bd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "7e09126b-079c-466c-92dd-c67b5e9c075c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7e09126b-079c-466c-92dd-c67b5e9c075c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66ce568e-445d-4547-8112-1993c48c637e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "66ce568e-445d-4547-8112-1993c48c637e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a2d968a3-ec45-4099-9644-a27855bb41b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a2d968a3-ec45-4099-9644-a27855bb41b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "182cf671-2c83-47ab-ad1c-fc0c915219c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "182cf671-2c83-47ab-ad1c-fc0c915219c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:25.896 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:25.896 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:25.896 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:25.896 13:28:14 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71982 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71982 ']' 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71982 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71982 00:15:25.896 killing process with pid 71982 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71982' 00:15:25.896 13:28:14 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71982 00:15:25.896 
13:28:14 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71982 00:15:27.809 13:28:16 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.809 13:28:16 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.809 13:28:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.809 13:28:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.809 13:28:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.809 ************************************ 00:15:27.809 START TEST bdev_hello_world 00:15:27.809 ************************************ 00:15:27.809 13:28:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.809 [2024-11-26 13:28:16.239458] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:15:27.809 [2024-11-26 13:28:16.239939] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72266 ] 00:15:28.069 [2024-11-26 13:28:16.406297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.069 [2024-11-26 13:28:16.549628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.640 [2024-11-26 13:28:16.992192] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:28.640 [2024-11-26 13:28:16.992508] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:28.640 [2024-11-26 13:28:16.992540] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:28.640 [2024-11-26 13:28:16.994911] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:28.640 [2024-11-26 13:28:16.996081] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:28.640 [2024-11-26 13:28:16.996295] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:28.640 [2024-11-26 13:28:16.996826] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
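The hello-world pass above ties directly to the enumeration step at the top of this section: blockdev.sh captures the bdev_get_bdevs JSON, extracts one name per bdev with jq into the bdevs_name array, copies it into bdev_list, picks nvme0n1 as hello_world_bdev, and runs the hello_bdev example against it. A minimal sketch of that capture step, assuming the rpc.py and example paths visible in this run (the exact pipeline inside blockdev.sh may differ):

    #!/usr/bin/env bash
    # Sketch only; assumes an SPDK target is already serving /var/tmp/spdk.sock.
    spdk=/home/vagrant/spdk_repo/spdk              # repo path taken from this log

    # bdev_get_bdevs returns a JSON array; jq prints one name per line and
    # mapfile collects those lines into a bash array (cf. bdevs_name above).
    mapfile -t bdevs_name < <("$spdk/scripts/rpc.py" bdev_get_bdevs | jq -r '.[].name')
    bdev_list=("${bdevs_name[@]}")
    hello_world_bdev=${bdev_list[0]}               # nvme0n1 in this run

    # Same invocation the trace shows: open the bdev, write "Hello World!",
    # read it back, then stop the app.
    "$spdk/build/examples/hello_bdev" --json "$spdk/test/bdev/bdev.json" \
        -b "$hello_world_bdev"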
00:15:28.640 00:15:28.640 [2024-11-26 13:28:16.996879] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:29.581 00:15:29.581 real 0m1.683s 00:15:29.581 user 0m1.253s 00:15:29.581 sys 0m0.276s 00:15:29.581 13:28:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.581 13:28:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:29.581 ************************************ 00:15:29.581 END TEST bdev_hello_world 00:15:29.581 ************************************ 00:15:29.581 13:28:17 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:29.581 13:28:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.581 13:28:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.581 13:28:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.581 ************************************ 00:15:29.581 START TEST bdev_bounds 00:15:29.581 ************************************ 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72303 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.581 Process bdevio pid: 72303 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72303' 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72303 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72303 ']' 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.581 13:28:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:29.581 [2024-11-26 13:28:17.989785] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
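bdevio is launched here with its own RPC socket, and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is waitforlisten blocking until pid 72303 actually serves that socket. The helper's internals are not shown in this trace; a hedged sketch of the usual poll-until-listening idea, probing with the standard spdk_get_version RPC:

    # Sketch, not the verbatim autotest_common.sh helper.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Give up early if the app died before it ever listened.
            kill -0 "$pid" 2> /dev/null || return 1
            # spdk_get_version is a cheap RPC that any live target answers.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   spdk_get_version &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1    # timed out waiting for the socket
    }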
00:15:29.581 [2024-11-26 13:28:17.991549] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72303 ] 00:15:29.842 [2024-11-26 13:28:18.158737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.842 [2024-11-26 13:28:18.306959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.842 [2024-11-26 13:28:18.307664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.842 [2024-11-26 13:28:18.307787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.414 13:28:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.414 13:28:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:30.414 13:28:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:30.414 I/O targets: 00:15:30.414 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.414 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.414 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.414 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:30.414 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:30.414 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:30.414 00:15:30.414 00:15:30.414 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.414 http://cunit.sourceforge.net/ 00:15:30.414 00:15:30.414 00:15:30.414 Suite: bdevio tests on: nvme3n1 00:15:30.414 Test: blockdev write read block ...passed 00:15:30.414 Test: blockdev write zeroes read block ...passed 00:15:30.414 Test: blockdev write zeroes read no split ...passed 00:15:30.675 Test: blockdev write zeroes read split ...passed 00:15:30.675 Test: blockdev write zeroes read split partial ...passed 00:15:30.675 Test: blockdev reset ...passed 00:15:30.675 Test: blockdev write read 8 blocks ...passed 00:15:30.675 Test: blockdev write read size > 128k ...passed 00:15:30.675 Test: blockdev write read invalid size ...passed 00:15:30.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.675 Test: blockdev write read max offset ...passed 00:15:30.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.675 Test: blockdev writev readv 8 blocks ...passed 00:15:30.675 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.675 Test: blockdev writev readv block ...passed 00:15:30.675 Test: blockdev writev readv size > 128k ...passed 00:15:30.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.675 Test: blockdev comparev and writev ...passed 00:15:30.675 Test: blockdev nvme passthru rw ...passed 00:15:30.675 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.675 Test: blockdev nvme admin passthru ...passed 00:15:30.675 Test: blockdev copy ...passed 00:15:30.675 Suite: bdevio tests on: nvme2n1 00:15:30.675 Test: blockdev write read block ...passed 00:15:30.675 Test: blockdev write zeroes read block ...passed 00:15:30.675 Test: blockdev write zeroes read no split ...passed 00:15:30.675 Test: blockdev write zeroes read split ...passed 00:15:30.675 Test: blockdev write zeroes read split partial ...passed 00:15:30.675 Test: blockdev reset ...passed 
00:15:30.675 Test: blockdev write read 8 blocks ...passed 00:15:30.675 Test: blockdev write read size > 128k ...passed 00:15:30.675 Test: blockdev write read invalid size ...passed 00:15:30.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.675 Test: blockdev write read max offset ...passed 00:15:30.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.675 Test: blockdev writev readv 8 blocks ...passed 00:15:30.675 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.675 Test: blockdev writev readv block ...passed 00:15:30.675 Test: blockdev writev readv size > 128k ...passed 00:15:30.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.675 Test: blockdev comparev and writev ...passed 00:15:30.675 Test: blockdev nvme passthru rw ...passed 00:15:30.675 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.675 Test: blockdev nvme admin passthru ...passed 00:15:30.675 Test: blockdev copy ...passed 00:15:30.675 Suite: bdevio tests on: nvme1n1 00:15:30.675 Test: blockdev write read block ...passed 00:15:30.675 Test: blockdev write zeroes read block ...passed 00:15:30.675 Test: blockdev write zeroes read no split ...passed 00:15:30.675 Test: blockdev write zeroes read split ...passed 00:15:30.675 Test: blockdev write zeroes read split partial ...passed 00:15:30.675 Test: blockdev reset ...passed 00:15:30.675 Test: blockdev write read 8 blocks ...passed 00:15:30.675 Test: blockdev write read size > 128k ...passed 00:15:30.675 Test: blockdev write read invalid size ...passed 00:15:30.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.675 Test: blockdev write read max offset ...passed 00:15:30.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.675 Test: blockdev writev readv 8 blocks ...passed 00:15:30.675 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.675 Test: blockdev writev readv block ...passed 00:15:30.675 Test: blockdev writev readv size > 128k ...passed 00:15:30.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.675 Test: blockdev comparev and writev ...passed 00:15:30.675 Test: blockdev nvme passthru rw ...passed 00:15:30.675 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.675 Test: blockdev nvme admin passthru ...passed 00:15:30.675 Test: blockdev copy ...passed 00:15:30.675 Suite: bdevio tests on: nvme0n3 00:15:30.675 Test: blockdev write read block ...passed 00:15:30.675 Test: blockdev write zeroes read block ...passed 00:15:30.675 Test: blockdev write zeroes read no split ...passed 00:15:30.675 Test: blockdev write zeroes read split ...passed 00:15:30.936 Test: blockdev write zeroes read split partial ...passed 00:15:30.936 Test: blockdev reset ...passed 00:15:30.936 Test: blockdev write read 8 blocks ...passed 00:15:30.936 Test: blockdev write read size > 128k ...passed 00:15:30.936 Test: blockdev write read invalid size ...passed 00:15:30.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.936 Test: blockdev write read max offset ...passed 00:15:30.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.936 Test: blockdev writev readv 8 blocks 
...passed 00:15:30.936 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.936 Test: blockdev writev readv block ...passed 00:15:30.936 Test: blockdev writev readv size > 128k ...passed 00:15:30.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.936 Test: blockdev comparev and writev ...passed 00:15:30.936 Test: blockdev nvme passthru rw ...passed 00:15:30.936 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.936 Test: blockdev nvme admin passthru ...passed 00:15:30.936 Test: blockdev copy ...passed 00:15:30.936 Suite: bdevio tests on: nvme0n2 00:15:30.936 Test: blockdev write read block ...passed 00:15:30.936 Test: blockdev write zeroes read block ...passed 00:15:30.936 Test: blockdev write zeroes read no split ...passed 00:15:30.936 Test: blockdev write zeroes read split ...passed 00:15:30.936 Test: blockdev write zeroes read split partial ...passed 00:15:30.936 Test: blockdev reset ...passed 00:15:30.936 Test: blockdev write read 8 blocks ...passed 00:15:30.936 Test: blockdev write read size > 128k ...passed 00:15:30.936 Test: blockdev write read invalid size ...passed 00:15:30.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.936 Test: blockdev write read max offset ...passed 00:15:30.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.936 Test: blockdev writev readv 8 blocks ...passed 00:15:30.936 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.936 Test: blockdev writev readv block ...passed 00:15:30.936 Test: blockdev writev readv size > 128k ...passed 00:15:30.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.936 Test: blockdev comparev and writev ...passed 00:15:30.936 Test: blockdev nvme passthru rw ...passed 00:15:30.936 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.936 Test: blockdev nvme admin passthru ...passed 00:15:30.936 Test: blockdev copy ...passed 00:15:30.936 Suite: bdevio tests on: nvme0n1 00:15:30.936 Test: blockdev write read block ...passed 00:15:30.936 Test: blockdev write zeroes read block ...passed 00:15:30.936 Test: blockdev write zeroes read no split ...passed 00:15:30.936 Test: blockdev write zeroes read split ...passed 00:15:30.936 Test: blockdev write zeroes read split partial ...passed 00:15:30.936 Test: blockdev reset ...passed 00:15:30.936 Test: blockdev write read 8 blocks ...passed 00:15:30.936 Test: blockdev write read size > 128k ...passed 00:15:30.936 Test: blockdev write read invalid size ...passed 00:15:30.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.936 Test: blockdev write read max offset ...passed 00:15:30.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.936 Test: blockdev writev readv 8 blocks ...passed 00:15:30.936 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.936 Test: blockdev writev readv block ...passed 00:15:30.936 Test: blockdev writev readv size > 128k ...passed 00:15:30.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.936 Test: blockdev comparev and writev ...passed 00:15:30.936 Test: blockdev nvme passthru rw ...passed 00:15:30.936 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.936 Test: blockdev nvme admin passthru ...passed 00:15:30.936 Test: blockdev copy ...passed 
00:15:30.936 
00:15:30.936 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:30.936               suites      6      6    n/a      0        0
00:15:30.936                tests    138    138    138      0        0
00:15:30.936              asserts    780    780    780      0      n/a
00:15:30.936 
00:15:30.936 Elapsed time = 1.257 seconds
00:15:30.936 0
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72303
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72303 ']'
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72303
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72303
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:30.936 13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72303'
killing process with pid 72303
13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72303
13:28:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72303
00:15:31.878 13:28:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:31.878 
00:15:31.878 real 0m2.443s
00:15:31.878 user 0m5.763s
00:15:31.878 sys 0m0.426s
00:15:31.878 13:28:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:31.878 ************************************
00:15:31.878 13:28:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:31.878 END TEST bdev_bounds
00:15:31.878 ************************************
00:15:31.878 13:28:20 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:31.878 13:28:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:31.878 13:28:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:31.878 13:28:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:31.878 ************************************
00:15:31.878 START TEST bdev_nbd
00:15:31.878 ************************************
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
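All 138 bdevio tests across the six suites passed, and the killprocess 72303 teardown above is fully spelled out by its xtrace lines: check the pid, refuse to kill a sudo wrapper, then kill and reap. Reconstructed as a sketch (not the verbatim autotest_common.sh helper):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                   # no pid, nothing to kill
        kill -0 "$pid" || return 1                  # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then        # reactor_0 here, so proceed
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                             # reap it so the socket is free
        fi
    }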
00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72360 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72360 /var/tmp/spdk-nbd.sock 00:15:31.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72360 ']' 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:31.878 13:28:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:32.139 [2024-11-26 13:28:20.509429] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:15:32.140 [2024-11-26 13:28:20.509589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.140 [2024-11-26 13:28:20.668956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.400 [2024-11-26 13:28:20.821931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.972 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.233 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.233 
1+0 records in 00:15:33.233 1+0 records out 00:15:33.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115135 s, 3.6 MB/s 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.234 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.495 1+0 records in 00:15:33.495 1+0 records out 00:15:33.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010164 s, 4.0 MB/s 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.495 13:28:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:33.787 13:28:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.787 1+0 records in 00:15:33.787 1+0 records out 00:15:33.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131048 s, 3.1 MB/s 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.787 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:34.047 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.048 1+0 records in 00:15:34.048 1+0 records out 00:15:34.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097181 s, 4.2 MB/s 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.048 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.309 1+0 records in 00:15:34.309 1+0 records out 00:15:34.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106171 s, 3.9 MB/s 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:34.309 13:28:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.309 1+0 records in 00:15:34.309 1+0 records out 00:15:34.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014789 s, 2.8 MB/s 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:34.309 13:28:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:34.570 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd0", 00:15:34.570 "bdev_name": "nvme0n1" 00:15:34.570 }, 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd1", 00:15:34.570 "bdev_name": "nvme0n2" 00:15:34.570 }, 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd2", 00:15:34.570 "bdev_name": "nvme0n3" 00:15:34.570 }, 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd3", 00:15:34.570 "bdev_name": "nvme1n1" 00:15:34.570 }, 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd4", 00:15:34.570 "bdev_name": "nvme2n1" 00:15:34.570 }, 00:15:34.570 { 00:15:34.570 "nbd_device": "/dev/nbd5", 00:15:34.570 "bdev_name": "nvme3n1" 00:15:34.570 } 00:15:34.570 ]' 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd0", 00:15:34.571 "bdev_name": "nvme0n1" 00:15:34.571 }, 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd1", 00:15:34.571 "bdev_name": "nvme0n2" 00:15:34.571 }, 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd2", 00:15:34.571 "bdev_name": "nvme0n3" 00:15:34.571 }, 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd3", 00:15:34.571 "bdev_name": "nvme1n1" 00:15:34.571 }, 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd4", 00:15:34.571 "bdev_name": "nvme2n1" 00:15:34.571 }, 00:15:34.571 { 00:15:34.571 "nbd_device": "/dev/nbd5", 00:15:34.571 "bdev_name": "nvme3n1" 00:15:34.571 } 00:15:34.571 ]' 00:15:34.571 13:28:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.571 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.831 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.092 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.353 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:35.614 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:35.614 13:28:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.614 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.876 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:36.137 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:36.396 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:36.397 /dev/nbd0 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.397 1+0 records in 00:15:36.397 1+0 records out 00:15:36.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638667 s, 6.4 MB/s 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.397 13:28:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:36.655 /dev/nbd1 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.655 1+0 records in 00:15:36.655 1+0 records out 00:15:36.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728855 s, 5.6 MB/s 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.655 13:28:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.655 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:36.916 /dev/nbd10 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.916 1+0 records in 00:15:36.916 1+0 records out 00:15:36.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116076 s, 3.5 MB/s 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.916 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:37.177 /dev/nbd11 00:15:37.177 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:37.177 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.178 13:28:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.178 1+0 records in 00:15:37.178 1+0 records out 00:15:37.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117822 s, 3.5 MB/s 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.178 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:37.439 /dev/nbd12 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.439 1+0 records in 00:15:37.439 1+0 records out 00:15:37.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000910496 s, 4.5 MB/s 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.439 13:28:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:37.701 /dev/nbd13 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.701 1+0 records in 00:15:37.701 1+0 records out 00:15:37.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143469 s, 2.9 MB/s 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.701 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd0", 00:15:37.963 "bdev_name": "nvme0n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd1", 00:15:37.963 "bdev_name": "nvme0n2" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd10", 00:15:37.963 "bdev_name": "nvme0n3" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd11", 00:15:37.963 "bdev_name": "nvme1n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd12", 00:15:37.963 "bdev_name": "nvme2n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd13", 00:15:37.963 "bdev_name": "nvme3n1" 00:15:37.963 } 00:15:37.963 ]' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd0", 00:15:37.963 "bdev_name": "nvme0n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd1", 00:15:37.963 "bdev_name": "nvme0n2" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd10", 00:15:37.963 "bdev_name": "nvme0n3" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd11", 00:15:37.963 "bdev_name": "nvme1n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd12", 00:15:37.963 "bdev_name": "nvme2n1" 00:15:37.963 }, 00:15:37.963 { 00:15:37.963 "nbd_device": "/dev/nbd13", 00:15:37.963 "bdev_name": "nvme3n1" 00:15:37.963 } 00:15:37.963 ]' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:37.963 /dev/nbd1 00:15:37.963 /dev/nbd10 00:15:37.963 /dev/nbd11 00:15:37.963 /dev/nbd12 00:15:37.963 /dev/nbd13' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:37.963 /dev/nbd1 00:15:37.963 /dev/nbd10 00:15:37.963 /dev/nbd11 00:15:37.963 /dev/nbd12 00:15:37.963 /dev/nbd13' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:37.963 256+0 records in 00:15:37.963 256+0 records out 00:15:37.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00780673 s, 134 MB/s 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.963 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:38.224 256+0 records in 00:15:38.224 256+0 records out 00:15:38.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240115 s, 4.4 MB/s 00:15:38.224 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.224 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:38.485 256+0 records in 00:15:38.485 256+0 records out 00:15:38.485 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24581 s, 
4.3 MB/s 00:15:38.485 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.485 13:28:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:38.745 256+0 records in 00:15:38.745 256+0 records out 00:15:38.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245906 s, 4.3 MB/s 00:15:38.745 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.745 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:39.006 256+0 records in 00:15:39.006 256+0 records out 00:15:39.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245602 s, 4.3 MB/s 00:15:39.006 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.006 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:39.266 256+0 records in 00:15:39.266 256+0 records out 00:15:39.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251008 s, 4.2 MB/s 00:15:39.266 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:39.266 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:39.528 256+0 records in 00:15:39.528 256+0 records out 00:15:39.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.277525 s, 3.8 MB/s 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:39.528 
13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.528 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.788 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.046 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.304 13:28:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.562 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.821 
13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.821 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:41.081 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:41.339 malloc_lvol_verify 00:15:41.339 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:41.596 c8469592-c33c-4867-b38f-3d2e4acef341 00:15:41.596 13:28:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:41.596 aa01ce0d-3ca8-4707-85b9-3f878f8a6ab3 00:15:41.596 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:41.854 /dev/nbd0 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:41.854 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:15:41.854  
done 00:15:41.854 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:41.854 00:15:41.854 Allocating group tables: 0/1 done 00:15:41.854 Writing inode tables: 0/1 done 00:15:41.854 Creating journal (1024 blocks): done 00:15:41.854 Writing superblocks and filesystem accounting information: 0/1 done 00:15:41.854 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.854 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72360 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72360 ']' 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72360 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72360 00:15:42.113 killing process with pid 72360 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72360' 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72360 00:15:42.113 13:28:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72360 00:15:42.682 ************************************ 00:15:42.682 END TEST bdev_nbd 00:15:42.682 ************************************ 00:15:42.682 13:28:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:42.682 00:15:42.682 real 0m10.774s 00:15:42.682 user 0m14.333s 00:15:42.682 sys 0m3.788s 00:15:42.682 13:28:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.682 13:28:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
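[Editor's note] The bdev_nbd test that ends here repeats one cycle per device: export a bdev over NBD through the RPC socket, wait for the kernel node to answer direct I/O, write 1 MiB of random data, compare it back, and stop the export. A condensed bash sketch of that cycle follows; the RPC verbs, socket path, and dd/cmp arguments are taken from the trace above, while the loop itself is a simplified reconstruction, not the repo's exact nbd_common.sh helpers, and the bdev-to-device map is trimmed to three entries for brevity.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest

declare -A map=([nvme0n1]=/dev/nbd0 [nvme0n2]=/dev/nbd1 [nvme0n3]=/dev/nbd10)

# 256 x 4 KiB = 1 MiB of random payload, matching the write pass above.
dd if=/dev/urandom of="$tmp" bs=4096 count=256

for bdev in "${!map[@]}"; do
    nbd=${map[$bdev]}
    $RPC nbd_start_disk "$bdev" "$nbd"

    # Readiness check: the device must appear in /proc/partitions and then
    # serve a 4 KiB O_DIRECT read before it is trusted with data.
    until grep -q -w "$(basename "$nbd")" /proc/partitions; do sleep 0.1; done
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct

    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write pass
    cmp -b -n 1M "$tmp" "$nbd"                               # verify pass

    $RPC nbd_stop_disk "$nbd"
done
rm -f "$tmp"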
00:15:42.682 13:28:31 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:42.682 13:28:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:42.944 13:28:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:42.944 13:28:31 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:42.944 13:28:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.944 13:28:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.944 13:28:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.944 ************************************ 00:15:42.944 START TEST bdev_fio 00:15:42.944 ************************************ 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:42.944 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.944 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:42.945 ************************************ 00:15:42.945 START TEST bdev_fio_rw_verify 00:15:42.945 ************************************ 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.945 13:28:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:43.205 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.205 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.206 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.206 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.206 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.206 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:43.206 fio-3.35 00:15:43.206 Starting 6 threads 00:15:55.450 00:15:55.450 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72770: Tue Nov 26 13:28:42 2024 00:15:55.450 read: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(557MiB/10002msec) 00:15:55.450 slat (usec): min=2, max=1734, avg= 6.57, stdev=16.57 00:15:55.450 clat (usec): min=73, max=7469, avg=1359.59, stdev=770.47 00:15:55.450 lat (usec): min=89, max=7490, avg=1366.16, stdev=771.09 
00:15:55.450 clat percentiles (usec): 00:15:55.450 | 50.000th=[ 1270], 99.000th=[ 3752], 99.900th=[ 5211], 99.990th=[ 6587], 00:15:55.450 | 99.999th=[ 7439] 00:15:55.450 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(571MiB/10002msec); 0 zone resets 00:15:55.450 slat (usec): min=10, max=5409, avg=42.24, stdev=145.28 00:15:55.450 clat (usec): min=85, max=9761, avg=1625.63, stdev=826.24 00:15:55.450 lat (usec): min=104, max=9779, avg=1667.88, stdev=839.36 00:15:55.450 clat percentiles (usec): 00:15:55.450 | 50.000th=[ 1500], 99.000th=[ 4178], 99.900th=[ 5932], 99.990th=[ 6915], 00:15:55.450 | 99.999th=[ 9765] 00:15:55.450 bw ( KiB/s): min=48346, max=74936, per=99.20%, avg=57992.16, stdev=1281.20, samples=114 00:15:55.450 iops : min=12084, max=18734, avg=14497.11, stdev=320.34, samples=114 00:15:55.450 lat (usec) : 100=0.01%, 250=1.97%, 500=6.18%, 750=8.86%, 1000=11.75% 00:15:55.450 lat (msec) : 2=49.01%, 4=21.22%, 10=1.00% 00:15:55.450 cpu : usr=43.38%, sys=32.40%, ctx=5559, majf=0, minf=14595 00:15:55.450 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.450 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.450 issued rwts: total=142639,146173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:55.450 00:15:55.450 Run status group 0 (all jobs): 00:15:55.450 READ: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=557MiB (584MB), run=10002-10002msec 00:15:55.450 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=571MiB (599MB), run=10002-10002msec 00:15:55.450 ----------------------------------------------------- 00:15:55.450 Suppressions used: 00:15:55.450 count bytes template 00:15:55.450 6 48 /usr/src/fio/parse.c 00:15:55.450 3444 330624 /usr/src/fio/iolog.c 00:15:55.450 1 8 libtcmalloc_minimal.so 00:15:55.450 1 904 libcrypto.so 00:15:55.450 ----------------------------------------------------- 00:15:55.450 00:15:55.450 00:15:55.450 real 0m11.935s 00:15:55.450 user 0m27.468s 00:15:55.450 sys 0m19.777s 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:55.450 ************************************ 00:15:55.450 END TEST bdev_fio_rw_verify 00:15:55.450 ************************************ 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:55.450 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "658645d7-68fa-41bc-ae93-192d0fb33155"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "658645d7-68fa-41bc-ae93-192d0fb33155",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "022f3660-ca86-47f2-8071-86feee243bd6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "022f3660-ca86-47f2-8071-86feee243bd6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "7e09126b-079c-466c-92dd-c67b5e9c075c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7e09126b-079c-466c-92dd-c67b5e9c075c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66ce568e-445d-4547-8112-1993c48c637e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "66ce568e-445d-4547-8112-1993c48c637e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a2d968a3-ec45-4099-9644-a27855bb41b9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a2d968a3-ec45-4099-9644-a27855bb41b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "182cf671-2c83-47ab-ad1c-fc0c915219c9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "182cf671-2c83-47ab-ad1c-fc0c915219c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:55.451 /home/vagrant/spdk_repo/spdk 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:55.451 00:15:55.451 real 0m12.116s 00:15:55.451 user 
0m27.536s 00:15:55.451 sys 0m19.857s 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.451 ************************************ 00:15:55.451 END TEST bdev_fio 00:15:55.451 ************************************ 00:15:55.451 13:28:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 13:28:43 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:55.451 13:28:43 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:55.451 13:28:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:55.451 13:28:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.451 13:28:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.451 ************************************ 00:15:55.451 START TEST bdev_verify 00:15:55.451 ************************************ 00:15:55.451 13:28:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:55.451 [2024-11-26 13:28:43.522785] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:15:55.451 [2024-11-26 13:28:43.522929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:15:55.451 [2024-11-26 13:28:43.688360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.451 [2024-11-26 13:28:43.814398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.451 [2024-11-26 13:28:43.814409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.711 Running I/O for 5 seconds... 
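[Editor's note] The verify pass just launched is SPDK's bdevperf example driven by the generated bdev.json. For readers reproducing it outside the harness, a standalone invocation of the same shape is sketched below; every flag and path is copied from the trace, and only the flags whose meanings are certain are annotated.

# -q 128: 128 outstanding I/Os, -o 4096: 4 KiB I/O size, -w verify: write
# then read back with data checking, -t 5: five-second run, -m 0x3: reactors
# on cores 0 and 1; -C is carried over from the trace as-is.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3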
00:15:58.041 23392.00 IOPS, 91.38 MiB/s [2024-11-26T13:28:47.554Z] 22864.00 IOPS, 89.31 MiB/s [2024-11-26T13:28:48.498Z] 22752.00 IOPS, 88.88 MiB/s [2024-11-26T13:28:49.441Z] 22568.00 IOPS, 88.16 MiB/s [2024-11-26T13:28:49.441Z] 22592.00 IOPS, 88.25 MiB/s 00:16:00.871 Latency(us) 00:16:00.871 [2024-11-26T13:28:49.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.871 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0x80000 00:16:00.871 nvme0n1 : 5.06 1794.51 7.01 0.00 0.00 71196.02 9225.45 67754.14 00:16:00.871 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x80000 length 0x80000 00:16:00.871 nvme0n1 : 5.02 1860.64 7.27 0.00 0.00 68662.45 7461.02 78643.20 00:16:00.871 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0x80000 00:16:00.871 nvme0n2 : 5.06 1746.56 6.82 0.00 0.00 73035.25 7259.37 62107.96 00:16:00.871 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x80000 length 0x80000 00:16:00.871 nvme0n2 : 5.03 1805.32 7.05 0.00 0.00 70634.33 9124.63 75013.51 00:16:00.871 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0x80000 00:16:00.871 nvme0n3 : 5.07 1718.04 6.71 0.00 0.00 74123.20 10233.70 68560.74 00:16:00.871 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x80000 length 0x80000 00:16:00.871 nvme0n3 : 5.04 1802.11 7.04 0.00 0.00 70624.35 5520.15 72997.02 00:16:00.871 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0x20000 00:16:00.871 nvme1n1 : 5.06 1719.74 6.72 0.00 0.00 73917.40 12098.95 64931.05 00:16:00.871 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x20000 length 0x20000 00:16:00.871 nvme1n1 : 5.05 1825.17 7.13 0.00 0.00 69588.51 9275.86 72997.02 00:16:00.871 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0xa0000 00:16:00.871 nvme2n1 : 5.08 1739.31 6.79 0.00 0.00 72964.85 7511.43 71383.83 00:16:00.871 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0xa0000 length 0xa0000 00:16:00.871 nvme2n1 : 5.08 1788.14 6.98 0.00 0.00 70899.14 10384.94 76626.71 00:16:00.871 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0x0 length 0xbd0bd 00:16:00.871 nvme3n1 : 5.07 2277.19 8.90 0.00 0.00 55563.51 5066.44 71787.13 00:16:00.871 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:00.871 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:00.871 nvme3n1 : 5.09 2320.90 9.07 0.00 0.00 54470.21 5948.65 68157.44 00:16:00.871 [2024-11-26T13:28:49.441Z] =================================================================================================================== 00:16:00.871 [2024-11-26T13:28:49.441Z] Total : 22397.65 87.49 0.00 0.00 68120.95 5066.44 78643.20 00:16:01.813 00:16:01.813 real 0m6.744s 00:16:01.813 user 0m10.889s 00:16:01.813 sys 0m1.471s 00:16:01.813 13:28:50 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.813 13:28:50 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:01.813 ************************************ 00:16:01.813 END TEST bdev_verify 00:16:01.813 ************************************ 00:16:01.814 13:28:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.814 13:28:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:01.814 13:28:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.814 13:28:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.814 ************************************ 00:16:01.814 START TEST bdev_verify_big_io 00:16:01.814 ************************************ 00:16:01.814 13:28:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.814 [2024-11-26 13:28:50.339354] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:01.814 [2024-11-26 13:28:50.339512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73033 ] 00:16:02.075 [2024-11-26 13:28:50.502540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.075 [2024-11-26 13:28:50.626594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.075 [2024-11-26 13:28:50.626742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.646 Running I/O for 5 seconds... 
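[Editor's note] In the result tables that follow, the MiB/s column is derivable from IOPS and the I/O size, which allows a quick cross-check of the reported numbers. Taking the first progress sample of this 64 KiB run (504 IOPS, printed just below):

# 504 IOPS x 65536 bytes / 2^20 bytes per MiB = 31.50 MiB/s, matching the
# "31.50 MiB/s" the log reports for that sample.
awk 'BEGIN { printf "%.2f MiB/s\n", 504 * 65536 / (1024 * 1024) }'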
00:16:08.488 504.00 IOPS, 31.50 MiB/s [2024-11-26T13:28:57.317Z] 2467.50 IOPS, 154.22 MiB/s [2024-11-26T13:28:57.317Z] 2979.33 IOPS, 186.21 MiB/s 00:16:08.747 Latency(us) 00:16:08.747 [2024-11-26T13:28:57.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.747 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0x8000 00:16:08.747 nvme0n1 : 5.76 111.05 6.94 0.00 0.00 1048819.55 94371.84 1045349.61 00:16:08.747 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x8000 length 0x8000 00:16:08.747 nvme0n1 : 5.76 130.63 8.16 0.00 0.00 939902.75 9175.04 1374441.16 00:16:08.747 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0x8000 00:16:08.747 nvme0n2 : 5.77 91.43 5.71 0.00 0.00 1320893.93 9175.04 2361715.79 00:16:08.747 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x8000 length 0x8000 00:16:08.747 nvme0n2 : 5.76 133.36 8.34 0.00 0.00 892084.64 152446.82 864671.90 00:16:08.747 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0x8000 00:16:08.747 nvme0n3 : 5.78 99.32 6.21 0.00 0.00 1187705.80 9326.28 2645637.91 00:16:08.747 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x8000 length 0x8000 00:16:08.747 nvme0n3 : 5.79 149.17 9.32 0.00 0.00 792004.08 10586.58 693673.35 00:16:08.747 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0x2000 00:16:08.747 nvme1n1 : 5.88 127.82 7.99 0.00 0.00 888643.69 22483.89 1555118.87 00:16:08.747 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x2000 length 0x2000 00:16:08.747 nvme1n1 : 5.79 129.78 8.11 0.00 0.00 879733.38 30247.38 1438968.91 00:16:08.747 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0xa000 00:16:08.747 nvme2n1 : 5.79 118.92 7.43 0.00 0.00 926018.84 14720.39 1651910.50 00:16:08.747 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0xa000 length 0xa000 00:16:08.747 nvme2n1 : 5.78 103.05 6.44 0.00 0.00 1071389.43 26617.70 2774693.42 00:16:08.747 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0x0 length 0xbd0b 00:16:08.747 nvme3n1 : 5.89 168.46 10.53 0.00 0.00 633355.09 2205.54 709805.29 00:16:08.747 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:08.747 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:08.747 nvme3n1 : 5.91 194.91 12.18 0.00 0.00 549662.52 2155.13 1219574.55 00:16:08.747 [2024-11-26T13:28:57.317Z] =================================================================================================================== 00:16:08.747 [2024-11-26T13:28:57.317Z] Total : 1557.89 97.37 0.00 0.00 882491.40 2155.13 2774693.42 00:16:09.685 00:16:09.685 real 0m7.705s 00:16:09.685 user 0m14.101s 00:16:09.685 sys 0m0.446s 00:16:09.685 13:28:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.685 
************************************ 00:16:09.685 13:28:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.685 END TEST bdev_verify_big_io 00:16:09.685 ************************************ 00:16:09.685 13:28:58 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.685 13:28:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:09.685 13:28:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.685 13:28:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.685 ************************************ 00:16:09.685 START TEST bdev_write_zeroes 00:16:09.685 ************************************ 00:16:09.685 13:28:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:09.685 [2024-11-26 13:28:58.097484] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:09.685 [2024-11-26 13:28:58.097721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73143 ] 00:16:09.946 [2024-11-26 13:28:58.272763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.946 [2024-11-26 13:28:58.388657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.517 Running I/O for 1 seconds... 
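The bdev_write_zeroes stage uses the same harness for a one-second, single-core pass; -w write_zeroes issues zero-fill commands instead of the verify read-back workload. Restating the invocation from the run_test line, again with SPDK_DIR as an assumed path variable:

    # Sketch only; single core (default mask), 4 KiB zero-fill writes, 1 s.
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1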
00:16:11.460 83488.00 IOPS, 326.12 MiB/s 00:16:11.460 Latency(us) 00:16:11.460 [2024-11-26T13:29:00.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.460 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme0n1 : 1.03 13436.97 52.49 0.00 0.00 9516.54 4032.98 20669.05 00:16:11.460 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme0n2 : 1.01 13543.34 52.90 0.00 0.00 9434.37 4058.19 18350.08 00:16:11.460 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme0n3 : 1.01 13505.62 52.76 0.00 0.00 9453.26 4083.40 18350.08 00:16:11.460 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme1n1 : 1.03 13468.63 52.61 0.00 0.00 9472.61 4184.22 22181.42 00:16:11.460 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme2n1 : 1.02 13319.52 52.03 0.00 0.00 9570.10 4738.76 20467.40 00:16:11.460 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:11.460 nvme3n1 : 1.03 15351.80 59.97 0.00 0.00 8297.14 4184.22 22282.24 00:16:11.460 [2024-11-26T13:29:00.030Z] =================================================================================================================== 00:16:11.460 [2024-11-26T13:29:00.030Z] Total : 82625.89 322.76 0.00 0.00 9266.38 4032.98 22282.24 00:16:12.403 00:16:12.403 real 0m2.658s 00:16:12.403 user 0m1.996s 00:16:12.403 sys 0m0.461s 00:16:12.403 ************************************ 00:16:12.403 END TEST bdev_write_zeroes 00:16:12.403 ************************************ 00:16:12.403 13:29:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.403 13:29:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:12.403 13:29:00 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.403 13:29:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:12.403 13:29:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.403 13:29:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.403 ************************************ 00:16:12.403 START TEST bdev_json_nonenclosed 00:16:12.403 ************************************ 00:16:12.403 13:29:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.403 [2024-11-26 13:29:00.849459] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:16:12.403 [2024-11-26 13:29:00.849616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 00:16:12.665 [2024-11-26 13:29:01.014288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.665 [2024-11-26 13:29:01.152312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.665 [2024-11-26 13:29:01.152432] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:12.665 [2024-11-26 13:29:01.152476] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:12.665 [2024-11-26 13:29:01.152488] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.927 00:16:12.927 real 0m0.587s 00:16:12.927 user 0m0.349s 00:16:12.927 sys 0m0.132s 00:16:12.927 ************************************ 00:16:12.927 END TEST bdev_json_nonenclosed 00:16:12.927 ************************************ 00:16:12.927 13:29:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.927 13:29:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:12.927 13:29:01 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.927 13:29:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:12.927 13:29:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.927 13:29:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.927 ************************************ 00:16:12.927 START TEST bdev_json_nonarray 00:16:12.927 ************************************ 00:16:12.927 13:29:01 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.188 [2024-11-26 13:29:01.512652] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:13.188 [2024-11-26 13:29:01.512809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73216 ] 00:16:13.188 [2024-11-26 13:29:01.681833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.449 [2024-11-26 13:29:01.833591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.449 [2024-11-26 13:29:01.833711] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:13.449 [2024-11-26 13:29:01.833735] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:13.449 [2024-11-26 13:29:01.833747] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.710 00:16:13.710 real 0m0.613s 00:16:13.710 user 0m0.378s 00:16:13.710 sys 0m0.128s 00:16:13.710 13:29:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.710 ************************************ 00:16:13.710 END TEST bdev_json_nonarray 00:16:13.710 ************************************ 00:16:13.710 13:29:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:13.710 13:29:02 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:14.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:15.229 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:15.229 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.173 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.433 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:16.433 00:16:16.433 real 0m53.166s 00:16:16.433 user 1m21.845s 00:16:16.433 sys 0m33.375s 00:16:16.433 13:29:04 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.433 13:29:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.433 ************************************ 00:16:16.433 END TEST blockdev_xnvme 00:16:16.433 ************************************ 00:16:16.433 13:29:04 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:16.433 13:29:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:16.433 13:29:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.433 13:29:04 -- common/autotest_common.sh@10 -- # set +x 00:16:16.433 ************************************ 00:16:16.433 START TEST ublk 00:16:16.433 ************************************ 00:16:16.433 13:29:04 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:16.433 * Looking for test storage... 
00:16:16.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:16.433 13:29:04 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.433 13:29:04 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.433 13:29:04 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.694 13:29:05 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.694 13:29:05 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.694 13:29:05 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.694 13:29:05 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.694 13:29:05 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.694 13:29:05 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.694 13:29:05 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:16.694 13:29:05 ublk -- scripts/common.sh@345 -- # : 1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.694 13:29:05 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.694 13:29:05 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@353 -- # local d=1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.694 13:29:05 ublk -- scripts/common.sh@355 -- # echo 1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.694 13:29:05 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@353 -- # local d=2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.694 13:29:05 ublk -- scripts/common.sh@355 -- # echo 2 00:16:16.694 13:29:05 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.695 13:29:05 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.695 13:29:05 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.695 13:29:05 ublk -- scripts/common.sh@368 -- # return 0 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.695 --rc genhtml_branch_coverage=1 00:16:16.695 --rc genhtml_function_coverage=1 00:16:16.695 --rc genhtml_legend=1 00:16:16.695 --rc geninfo_all_blocks=1 00:16:16.695 --rc geninfo_unexecuted_blocks=1 00:16:16.695 00:16:16.695 ' 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.695 --rc genhtml_branch_coverage=1 00:16:16.695 --rc genhtml_function_coverage=1 00:16:16.695 --rc genhtml_legend=1 00:16:16.695 --rc geninfo_all_blocks=1 00:16:16.695 --rc geninfo_unexecuted_blocks=1 00:16:16.695 00:16:16.695 ' 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.695 --rc genhtml_branch_coverage=1 00:16:16.695 --rc 
genhtml_function_coverage=1 00:16:16.695 --rc genhtml_legend=1 00:16:16.695 --rc geninfo_all_blocks=1 00:16:16.695 --rc geninfo_unexecuted_blocks=1 00:16:16.695 00:16:16.695 ' 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:16.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.695 --rc genhtml_branch_coverage=1 00:16:16.695 --rc genhtml_function_coverage=1 00:16:16.695 --rc genhtml_legend=1 00:16:16.695 --rc geninfo_all_blocks=1 00:16:16.695 --rc geninfo_unexecuted_blocks=1 00:16:16.695 00:16:16.695 ' 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:16.695 13:29:05 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:16.695 13:29:05 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:16.695 13:29:05 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:16.695 13:29:05 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:16.695 13:29:05 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:16.695 13:29:05 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:16.695 13:29:05 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:16.695 13:29:05 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:16.695 13:29:05 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.695 13:29:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.695 ************************************ 00:16:16.695 START TEST test_save_ublk_config 00:16:16.695 ************************************ 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73515 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73515 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73515 ']' 00:16:16.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
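The test_save_ublk_config case starting here is a save/restore round trip: bring up spdk_tgt with ublk tracing, create a ublk target and disk over RPC, snapshot runtime state with save_config, then restart the target with that JSON fed back through process substitution, which is where the later -c /dev/fd/63 argument comes from. A condensed sketch, assuming the standard scripts/rpc.py client and the malloc geometry visible in the saved JSON below (8192 blocks x 4096 bytes = 32 MiB):

    # Sketch of the round trip, not the literal ublk.sh code.
    "$SPDK_DIR/build/bin/spdk_tgt" -L ublk &
    tgtpid=$!
    "$SPDK_DIR/scripts/rpc.py" ublk_create_target
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096
    "$SPDK_DIR/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128
    config=$("$SPDK_DIR/scripts/rpc.py" save_config)  # JSON dumped below
    kill "$tgtpid"; wait "$tgtpid"
    # Restart from the snapshot; <(...) shows up as /dev/fd/63 in the log.
    "$SPDK_DIR/build/bin/spdk_tgt" -L ublk -c <(echo "$config") &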
00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.695 13:29:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:16.695 [2024-11-26 13:29:05.179154] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:16.695 [2024-11-26 13:29:05.179302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73515 ] 00:16:16.956 [2024-11-26 13:29:05.345558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.956 [2024-11-26 13:29:05.466692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.900 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 [2024-11-26 13:29:06.190481] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:17.900 [2024-11-26 13:29:06.191373] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:17.900 malloc0 00:16:17.900 [2024-11-26 13:29:06.262614] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:17.900 [2024-11-26 13:29:06.262715] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:17.900 [2024-11-26 13:29:06.262726] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:17.900 [2024-11-26 13:29:06.262734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:17.900 [2024-11-26 13:29:06.271576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:17.900 [2024-11-26 13:29:06.271611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:17.901 [2024-11-26 13:29:06.278485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:17.901 [2024-11-26 13:29:06.278626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:17.901 [2024-11-26 13:29:06.295484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:17.901 0 00:16:17.901 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.901 13:29:06 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:17.901 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.901 13:29:06 ublk.test_save_ublk_config -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.162 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.162 13:29:06 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:18.162 "subsystems": [ 00:16:18.162 { 00:16:18.162 "subsystem": "fsdev", 00:16:18.162 "config": [ 00:16:18.162 { 00:16:18.162 "method": "fsdev_set_opts", 00:16:18.162 "params": { 00:16:18.162 "fsdev_io_pool_size": 65535, 00:16:18.162 "fsdev_io_cache_size": 256 00:16:18.162 } 00:16:18.162 } 00:16:18.162 ] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "keyring", 00:16:18.162 "config": [] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "iobuf", 00:16:18.162 "config": [ 00:16:18.162 { 00:16:18.162 "method": "iobuf_set_options", 00:16:18.162 "params": { 00:16:18.162 "small_pool_count": 8192, 00:16:18.162 "large_pool_count": 1024, 00:16:18.162 "small_bufsize": 8192, 00:16:18.162 "large_bufsize": 135168, 00:16:18.162 "enable_numa": false 00:16:18.162 } 00:16:18.162 } 00:16:18.162 ] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "sock", 00:16:18.162 "config": [ 00:16:18.162 { 00:16:18.162 "method": "sock_set_default_impl", 00:16:18.162 "params": { 00:16:18.162 "impl_name": "posix" 00:16:18.162 } 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "method": "sock_impl_set_options", 00:16:18.162 "params": { 00:16:18.162 "impl_name": "ssl", 00:16:18.162 "recv_buf_size": 4096, 00:16:18.162 "send_buf_size": 4096, 00:16:18.162 "enable_recv_pipe": true, 00:16:18.162 "enable_quickack": false, 00:16:18.162 "enable_placement_id": 0, 00:16:18.162 "enable_zerocopy_send_server": true, 00:16:18.162 "enable_zerocopy_send_client": false, 00:16:18.162 "zerocopy_threshold": 0, 00:16:18.162 "tls_version": 0, 00:16:18.162 "enable_ktls": false 00:16:18.162 } 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "method": "sock_impl_set_options", 00:16:18.162 "params": { 00:16:18.162 "impl_name": "posix", 00:16:18.162 "recv_buf_size": 2097152, 00:16:18.162 "send_buf_size": 2097152, 00:16:18.162 "enable_recv_pipe": true, 00:16:18.162 "enable_quickack": false, 00:16:18.162 "enable_placement_id": 0, 00:16:18.162 "enable_zerocopy_send_server": true, 00:16:18.162 "enable_zerocopy_send_client": false, 00:16:18.162 "zerocopy_threshold": 0, 00:16:18.162 "tls_version": 0, 00:16:18.162 "enable_ktls": false 00:16:18.162 } 00:16:18.162 } 00:16:18.162 ] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "vmd", 00:16:18.162 "config": [] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "accel", 00:16:18.162 "config": [ 00:16:18.162 { 00:16:18.162 "method": "accel_set_options", 00:16:18.162 "params": { 00:16:18.162 "small_cache_size": 128, 00:16:18.162 "large_cache_size": 16, 00:16:18.162 "task_count": 2048, 00:16:18.162 "sequence_count": 2048, 00:16:18.162 "buf_count": 2048 00:16:18.162 } 00:16:18.162 } 00:16:18.162 ] 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "subsystem": "bdev", 00:16:18.162 "config": [ 00:16:18.162 { 00:16:18.162 "method": "bdev_set_options", 00:16:18.162 "params": { 00:16:18.162 "bdev_io_pool_size": 65535, 00:16:18.162 "bdev_io_cache_size": 256, 00:16:18.162 "bdev_auto_examine": true, 00:16:18.162 "iobuf_small_cache_size": 128, 00:16:18.162 "iobuf_large_cache_size": 16 00:16:18.162 } 00:16:18.162 }, 00:16:18.162 { 00:16:18.162 "method": "bdev_raid_set_options", 00:16:18.162 "params": { 00:16:18.163 "process_window_size_kb": 1024, 00:16:18.163 "process_max_bandwidth_mb_sec": 0 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": 
"bdev_iscsi_set_options", 00:16:18.163 "params": { 00:16:18.163 "timeout_sec": 30 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "bdev_nvme_set_options", 00:16:18.163 "params": { 00:16:18.163 "action_on_timeout": "none", 00:16:18.163 "timeout_us": 0, 00:16:18.163 "timeout_admin_us": 0, 00:16:18.163 "keep_alive_timeout_ms": 10000, 00:16:18.163 "arbitration_burst": 0, 00:16:18.163 "low_priority_weight": 0, 00:16:18.163 "medium_priority_weight": 0, 00:16:18.163 "high_priority_weight": 0, 00:16:18.163 "nvme_adminq_poll_period_us": 10000, 00:16:18.163 "nvme_ioq_poll_period_us": 0, 00:16:18.163 "io_queue_requests": 0, 00:16:18.163 "delay_cmd_submit": true, 00:16:18.163 "transport_retry_count": 4, 00:16:18.163 "bdev_retry_count": 3, 00:16:18.163 "transport_ack_timeout": 0, 00:16:18.163 "ctrlr_loss_timeout_sec": 0, 00:16:18.163 "reconnect_delay_sec": 0, 00:16:18.163 "fast_io_fail_timeout_sec": 0, 00:16:18.163 "disable_auto_failback": false, 00:16:18.163 "generate_uuids": false, 00:16:18.163 "transport_tos": 0, 00:16:18.163 "nvme_error_stat": false, 00:16:18.163 "rdma_srq_size": 0, 00:16:18.163 "io_path_stat": false, 00:16:18.163 "allow_accel_sequence": false, 00:16:18.163 "rdma_max_cq_size": 0, 00:16:18.163 "rdma_cm_event_timeout_ms": 0, 00:16:18.163 "dhchap_digests": [ 00:16:18.163 "sha256", 00:16:18.163 "sha384", 00:16:18.163 "sha512" 00:16:18.163 ], 00:16:18.163 "dhchap_dhgroups": [ 00:16:18.163 "null", 00:16:18.163 "ffdhe2048", 00:16:18.163 "ffdhe3072", 00:16:18.163 "ffdhe4096", 00:16:18.163 "ffdhe6144", 00:16:18.163 "ffdhe8192" 00:16:18.163 ] 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "bdev_nvme_set_hotplug", 00:16:18.163 "params": { 00:16:18.163 "period_us": 100000, 00:16:18.163 "enable": false 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "bdev_malloc_create", 00:16:18.163 "params": { 00:16:18.163 "name": "malloc0", 00:16:18.163 "num_blocks": 8192, 00:16:18.163 "block_size": 4096, 00:16:18.163 "physical_block_size": 4096, 00:16:18.163 "uuid": "a92db7d0-35db-40df-b032-3c77ea04f7f3", 00:16:18.163 "optimal_io_boundary": 0, 00:16:18.163 "md_size": 0, 00:16:18.163 "dif_type": 0, 00:16:18.163 "dif_is_head_of_md": false, 00:16:18.163 "dif_pi_format": 0 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "bdev_wait_for_examine" 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "scsi", 00:16:18.163 "config": null 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "scheduler", 00:16:18.163 "config": [ 00:16:18.163 { 00:16:18.163 "method": "framework_set_scheduler", 00:16:18.163 "params": { 00:16:18.163 "name": "static" 00:16:18.163 } 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "vhost_scsi", 00:16:18.163 "config": [] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "vhost_blk", 00:16:18.163 "config": [] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "ublk", 00:16:18.163 "config": [ 00:16:18.163 { 00:16:18.163 "method": "ublk_create_target", 00:16:18.163 "params": { 00:16:18.163 "cpumask": "1" 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "ublk_start_disk", 00:16:18.163 "params": { 00:16:18.163 "bdev_name": "malloc0", 00:16:18.163 "ublk_id": 0, 00:16:18.163 "num_queues": 1, 00:16:18.163 "queue_depth": 128 00:16:18.163 } 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "nbd", 00:16:18.163 "config": [] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 
"subsystem": "nvmf", 00:16:18.163 "config": [ 00:16:18.163 { 00:16:18.163 "method": "nvmf_set_config", 00:16:18.163 "params": { 00:16:18.163 "discovery_filter": "match_any", 00:16:18.163 "admin_cmd_passthru": { 00:16:18.163 "identify_ctrlr": false 00:16:18.163 }, 00:16:18.163 "dhchap_digests": [ 00:16:18.163 "sha256", 00:16:18.163 "sha384", 00:16:18.163 "sha512" 00:16:18.163 ], 00:16:18.163 "dhchap_dhgroups": [ 00:16:18.163 "null", 00:16:18.163 "ffdhe2048", 00:16:18.163 "ffdhe3072", 00:16:18.163 "ffdhe4096", 00:16:18.163 "ffdhe6144", 00:16:18.163 "ffdhe8192" 00:16:18.163 ] 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "nvmf_set_max_subsystems", 00:16:18.163 "params": { 00:16:18.163 "max_subsystems": 1024 00:16:18.163 } 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "method": "nvmf_set_crdt", 00:16:18.163 "params": { 00:16:18.163 "crdt1": 0, 00:16:18.163 "crdt2": 0, 00:16:18.163 "crdt3": 0 00:16:18.163 } 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 }, 00:16:18.163 { 00:16:18.163 "subsystem": "iscsi", 00:16:18.163 "config": [ 00:16:18.163 { 00:16:18.163 "method": "iscsi_set_options", 00:16:18.163 "params": { 00:16:18.163 "node_base": "iqn.2016-06.io.spdk", 00:16:18.163 "max_sessions": 128, 00:16:18.163 "max_connections_per_session": 2, 00:16:18.163 "max_queue_depth": 64, 00:16:18.163 "default_time2wait": 2, 00:16:18.163 "default_time2retain": 20, 00:16:18.163 "first_burst_length": 8192, 00:16:18.163 "immediate_data": true, 00:16:18.163 "allow_duplicated_isid": false, 00:16:18.163 "error_recovery_level": 0, 00:16:18.163 "nop_timeout": 60, 00:16:18.163 "nop_in_interval": 30, 00:16:18.163 "disable_chap": false, 00:16:18.163 "require_chap": false, 00:16:18.163 "mutual_chap": false, 00:16:18.163 "chap_group": 0, 00:16:18.163 "max_large_datain_per_connection": 64, 00:16:18.163 "max_r2t_per_connection": 4, 00:16:18.163 "pdu_pool_size": 36864, 00:16:18.163 "immediate_data_pool_size": 16384, 00:16:18.163 "data_out_pool_size": 2048 00:16:18.163 } 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 } 00:16:18.163 ] 00:16:18.163 }' 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73515 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73515 ']' 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73515 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73515 00:16:18.163 killing process with pid 73515 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73515' 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73515 00:16:18.163 13:29:06 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73515 00:16:19.549 [2024-11-26 13:29:08.083987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:19.810 [2024-11-26 13:29:08.122499] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:19.810 [2024-11-26 
13:29:08.122614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:19.810 [2024-11-26 13:29:08.130480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:19.810 [2024-11-26 13:29:08.130530] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:19.810 [2024-11-26 13:29:08.130541] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:19.810 [2024-11-26 13:29:08.130566] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:19.810 [2024-11-26 13:29:08.130693] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73577 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73577 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73577 ']' 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.754 13:29:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:20.754 "subsystems": [ 00:16:20.754 { 00:16:20.754 "subsystem": "fsdev", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "fsdev_set_opts", 00:16:20.754 "params": { 00:16:20.754 "fsdev_io_pool_size": 65535, 00:16:20.754 "fsdev_io_cache_size": 256 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "keyring", 00:16:20.754 "config": [] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "iobuf", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "iobuf_set_options", 00:16:20.754 "params": { 00:16:20.754 "small_pool_count": 8192, 00:16:20.754 "large_pool_count": 1024, 00:16:20.754 "small_bufsize": 8192, 00:16:20.754 "large_bufsize": 135168, 00:16:20.754 "enable_numa": false 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "sock", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "sock_set_default_impl", 00:16:20.754 "params": { 00:16:20.754 "impl_name": "posix" 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "sock_impl_set_options", 00:16:20.754 "params": { 00:16:20.754 "impl_name": "ssl", 00:16:20.754 "recv_buf_size": 4096, 00:16:20.754 "send_buf_size": 4096, 00:16:20.754 "enable_recv_pipe": true, 00:16:20.754 "enable_quickack": false, 00:16:20.754 "enable_placement_id": 0, 00:16:20.754 "enable_zerocopy_send_server": true, 00:16:20.754 "enable_zerocopy_send_client": false, 00:16:20.754 "zerocopy_threshold": 0, 00:16:20.754 "tls_version": 0, 00:16:20.754 "enable_ktls": false 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "sock_impl_set_options", 00:16:20.754 "params": { 00:16:20.754 "impl_name": "posix", 
00:16:20.754 "recv_buf_size": 2097152, 00:16:20.754 "send_buf_size": 2097152, 00:16:20.754 "enable_recv_pipe": true, 00:16:20.754 "enable_quickack": false, 00:16:20.754 "enable_placement_id": 0, 00:16:20.754 "enable_zerocopy_send_server": true, 00:16:20.754 "enable_zerocopy_send_client": false, 00:16:20.754 "zerocopy_threshold": 0, 00:16:20.754 "tls_version": 0, 00:16:20.754 "enable_ktls": false 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "vmd", 00:16:20.754 "config": [] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "accel", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "accel_set_options", 00:16:20.754 "params": { 00:16:20.754 "small_cache_size": 128, 00:16:20.754 "large_cache_size": 16, 00:16:20.754 "task_count": 2048, 00:16:20.754 "sequence_count": 2048, 00:16:20.754 "buf_count": 2048 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "bdev", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "bdev_set_options", 00:16:20.754 "params": { 00:16:20.754 "bdev_io_pool_size": 65535, 00:16:20.754 "bdev_io_cache_size": 256, 00:16:20.754 "bdev_auto_examine": true, 00:16:20.754 "iobuf_small_cache_size": 128, 00:16:20.754 "iobuf_large_cache_size": 16 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_raid_set_options", 00:16:20.754 "params": { 00:16:20.754 "process_window_size_kb": 1024, 00:16:20.754 "process_max_bandwidth_mb_sec": 0 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_iscsi_set_options", 00:16:20.754 "params": { 00:16:20.754 "timeout_sec": 30 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_nvme_set_options", 00:16:20.754 "params": { 00:16:20.754 "action_on_timeout": "none", 00:16:20.754 "timeout_us": 0, 00:16:20.754 "timeout_admin_us": 0, 00:16:20.754 "keep_alive_timeout_ms": 10000, 00:16:20.754 "arbitration_burst": 0, 00:16:20.754 "low_priority_weight": 0, 00:16:20.754 "medium_priority_weight": 0, 00:16:20.754 "high_priority_weight": 0, 00:16:20.754 "nvme_adminq_poll_period_us": 10000, 00:16:20.754 "nvme_ioq_poll_period_us": 0, 00:16:20.754 "io_queue_requests": 0, 00:16:20.754 "delay_cmd_submit": true, 00:16:20.754 "transport_retry_count": 4, 00:16:20.754 "bdev_retry_count": 3, 00:16:20.754 "transport_ack_timeout": 0, 00:16:20.754 "ctrlr_loss_timeout_sec": 0, 00:16:20.754 "reconnect_delay_sec": 0, 00:16:20.754 "fast_io_fail_timeout_sec": 0, 00:16:20.754 "disable_auto_failback": false, 00:16:20.754 "generate_uuids": false, 00:16:20.754 "transport_tos": 0, 00:16:20.754 "nvme_error_stat": false, 00:16:20.754 "rdma_srq_size": 0, 00:16:20.754 "io_path_stat": false, 00:16:20.754 "allow_accel_sequence": false, 00:16:20.754 "rdma_max_cq_size": 0, 00:16:20.754 "rdma_cm_event_timeout_ms": 0, 00:16:20.754 "dhchap_digests": [ 00:16:20.754 "sha256", 00:16:20.754 "sha384", 00:16:20.754 "sha512" 00:16:20.754 ], 00:16:20.754 "dhchap_dhgroups": [ 00:16:20.754 "null", 00:16:20.754 "ffdhe2048", 00:16:20.754 "ffdhe3072", 00:16:20.754 "ffdhe4096", 00:16:20.754 "ffdhe6144", 00:16:20.754 "ffdhe8192" 00:16:20.754 ] 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_nvme_set_hotplug", 00:16:20.754 "params": { 00:16:20.754 "period_us": 100000, 00:16:20.754 "enable": false 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_malloc_create", 00:16:20.754 "params": { 00:16:20.754 "name": "malloc0", 00:16:20.754 "num_blocks": 8192, 00:16:20.754 
"block_size": 4096, 00:16:20.754 "physical_block_size": 4096, 00:16:20.754 "uuid": "a92db7d0-35db-40df-b032-3c77ea04f7f3", 00:16:20.754 "optimal_io_boundary": 0, 00:16:20.754 "md_size": 0, 00:16:20.754 "dif_type": 0, 00:16:20.754 "dif_is_head_of_md": false, 00:16:20.754 "dif_pi_format": 0 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "bdev_wait_for_examine" 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "scsi", 00:16:20.754 "config": null 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "scheduler", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "framework_set_scheduler", 00:16:20.754 "params": { 00:16:20.754 "name": "static" 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "vhost_scsi", 00:16:20.754 "config": [] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "vhost_blk", 00:16:20.754 "config": [] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "ublk", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "ublk_create_target", 00:16:20.754 "params": { 00:16:20.754 "cpumask": "1" 00:16:20.754 } 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "method": "ublk_start_disk", 00:16:20.754 "params": { 00:16:20.754 "bdev_name": "malloc0", 00:16:20.754 "ublk_id": 0, 00:16:20.754 "num_queues": 1, 00:16:20.754 "queue_depth": 128 00:16:20.754 } 00:16:20.754 } 00:16:20.754 ] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "nbd", 00:16:20.754 "config": [] 00:16:20.754 }, 00:16:20.754 { 00:16:20.754 "subsystem": "nvmf", 00:16:20.754 "config": [ 00:16:20.754 { 00:16:20.754 "method": "nvmf_set_config", 00:16:20.754 "params": { 00:16:20.754 "discovery_filter": "match_any", 00:16:20.754 "admin_cmd_passthru": { 00:16:20.754 "identify_ctrlr": false 00:16:20.754 }, 00:16:20.754 "dhchap_digests": [ 00:16:20.754 "sha256", 00:16:20.754 "sha384", 00:16:20.754 "sha512" 00:16:20.754 ], 00:16:20.754 "dhchap_dhgroups": [ 00:16:20.754 "null", 00:16:20.755 "ffdhe2048", 00:16:20.755 "ffdhe3072", 00:16:20.755 "ffdhe4096", 00:16:20.755 "ffdhe6144", 00:16:20.755 "ffdhe8192" 00:16:20.755 ] 00:16:20.755 } 00:16:20.755 }, 00:16:20.755 { 00:16:20.755 "method": "nvmf_set_max_subsystems", 00:16:20.755 "params": { 00:16:20.755 "max_subsystems": 1024 00:16:20.755 } 00:16:20.755 }, 00:16:20.755 { 00:16:20.755 "method": "nvmf_set_crdt", 00:16:20.755 "params": { 00:16:20.755 "crdt1": 0, 00:16:20.755 "crdt2": 0, 00:16:20.755 "crdt3": 0 00:16:20.755 } 00:16:20.755 } 00:16:20.755 ] 00:16:20.755 }, 00:16:20.755 { 00:16:20.755 "subsystem": "iscsi", 00:16:20.755 "config": [ 00:16:20.755 { 00:16:20.755 "method": "iscsi_set_options", 00:16:20.755 "params": { 00:16:20.755 "node_base": "iqn.2016-06.io.spdk", 00:16:20.755 "max_sessions": 128, 00:16:20.755 "max_connections_per_session": 2, 00:16:20.755 "max_queue_depth": 64, 00:16:20.755 "default_time2wait": 2, 00:16:20.755 "default_time2retain": 20, 00:16:20.755 "first_burst_length": 8192, 00:16:20.755 "immediate_data": true, 00:16:20.755 "allow_duplicated_isid": false, 00:16:20.755 "error_recovery_level": 0, 00:16:20.755 "nop_timeout": 60, 00:16:20.755 "nop_in_interval": 30, 00:16:20.755 "disable_chap": false, 00:16:20.755 "require_chap": false, 00:16:20.755 "mutual_chap": false, 00:16:20.755 "chap_group": 0, 00:16:20.755 "max_large_datain_per_connection": 64, 00:16:20.755 "max_r2t_per_connection": 4, 00:16:20.755 "pdu_pool_size": 36864, 00:16:20.755 "immediate_data_pool_size": 16384, 00:16:20.755 "data_out_pool_size": 2048 
00:16:20.755 } 00:16:20.755 } 00:16:20.755 ] 00:16:20.755 } 00:16:20.755 ] 00:16:20.755 }' 00:16:21.016 [2024-11-26 13:29:09.370620] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:21.016 [2024-11-26 13:29:09.370910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73577 ] 00:16:21.016 [2024-11-26 13:29:09.528094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.285 [2024-11-26 13:29:09.613862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.857 [2024-11-26 13:29:10.262457] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.857 [2024-11-26 13:29:10.263089] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.857 [2024-11-26 13:29:10.270547] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:21.857 [2024-11-26 13:29:10.270608] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:21.857 [2024-11-26 13:29:10.270615] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.857 [2024-11-26 13:29:10.270620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.857 [2024-11-26 13:29:10.279517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.857 [2024-11-26 13:29:10.279535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:21.857 [2024-11-26 13:29:10.286463] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:21.857 [2024-11-26 13:29:10.286535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:21.857 [2024-11-26 13:29:10.303456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73577 00:16:21.857 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73577 ']' 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73577 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73577 00:16:21.858 killing process with pid 73577 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73577' 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73577 00:16:21.858 13:29:10 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73577 00:16:23.243 [2024-11-26 13:29:11.392899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:23.243 [2024-11-26 13:29:11.423477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:23.243 [2024-11-26 13:29:11.423573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:23.243 [2024-11-26 13:29:11.431464] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:23.243 [2024-11-26 13:29:11.431504] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:23.243 [2024-11-26 13:29:11.431510] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:23.243 [2024-11-26 13:29:11.431529] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:23.243 [2024-11-26 13:29:11.431637] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.187 13:29:12 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:24.187 ************************************ 00:16:24.187 END TEST test_save_ublk_config 00:16:24.187 ************************************ 00:16:24.187 00:16:24.187 real 0m7.522s 00:16:24.187 user 0m4.927s 00:16:24.187 sys 0m3.214s 00:16:24.187 13:29:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.187 13:29:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 13:29:12 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73645 00:16:24.187 13:29:12 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.187 13:29:12 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73645 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@835 -- # '[' -z 73645 ']' 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.187 13:29:12 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:24.187 13:29:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.187 [2024-11-26 13:29:12.726666] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:16:24.187 [2024-11-26 13:29:12.726813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73645 ] 00:16:24.448 [2024-11-26 13:29:12.889599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.709 [2024-11-26 13:29:13.015201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.709 [2024-11-26 13:29:13.015282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.280 13:29:13 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.280 13:29:13 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:25.280 13:29:13 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:25.280 13:29:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:25.280 13:29:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.280 13:29:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.280 ************************************ 00:16:25.280 START TEST test_create_ublk 00:16:25.280 ************************************ 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:25.280 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.280 [2024-11-26 13:29:13.721469] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.280 [2024-11-26 13:29:13.723834] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.280 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:25.280 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.280 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.542 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:25.542 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.542 [2024-11-26 13:29:13.945624] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:25.542 [2024-11-26 13:29:13.946075] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:25.542 [2024-11-26 13:29:13.946108] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.542 [2024-11-26 13:29:13.946117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.542 [2024-11-26 13:29:13.954774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.542 [2024-11-26 13:29:13.954809] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.542 
[2024-11-26 13:29:13.961482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.542 [2024-11-26 13:29:13.970537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.542 [2024-11-26 13:29:13.984588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.542 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:25.542 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:25.542 13:29:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.542 13:29:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.542 13:29:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:25.542 { 00:16:25.542 "ublk_device": "/dev/ublkb0", 00:16:25.542 "id": 0, 00:16:25.542 "queue_depth": 512, 00:16:25.542 "num_queues": 4, 00:16:25.542 "bdev_name": "Malloc0" 00:16:25.542 } 00:16:25.542 ]' 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:25.542 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:25.804 13:29:14 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
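run_fio_test expands to the single fio command that follows; rewrapped here for readability with the flags unchanged. The whole 128 MiB ublk device is written with pattern 0xcc for ten seconds, and, as fio warns below, the separate read-back verify phase never starts because the time-based write phase consumes the entire runtime.

    # Same command as in the log, wrapped one flag group per line.
    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0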
00:16:25.804 13:29:14 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:25.804 fio: verification read phase will never start because write phase uses all of runtime 00:16:25.804 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:25.804 fio-3.35 00:16:25.804 Starting 1 process 00:16:38.044 00:16:38.044 fio_test: (groupid=0, jobs=1): err= 0: pid=73690: Tue Nov 26 13:29:24 2024 00:16:38.044 write: IOPS=19.0k, BW=74.1MiB/s (77.6MB/s)(741MiB/10001msec); 0 zone resets 00:16:38.044 clat (usec): min=35, max=8572, avg=52.05, stdev=113.06 00:16:38.044 lat (usec): min=35, max=8589, avg=52.44, stdev=113.08 00:16:38.044 clat percentiles (usec): 00:16:38.044 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:16:38.044 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 48], 00:16:38.044 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 61], 00:16:38.044 | 99.00th=[ 72], 99.50th=[ 79], 99.90th=[ 2311], 99.95th=[ 3228], 00:16:38.044 | 99.99th=[ 4015] 00:16:38.044 bw ( KiB/s): min=25424, max=82504, per=99.75%, avg=75640.84, stdev=12862.25, samples=19 00:16:38.044 iops : min= 6356, max=20626, avg=18910.21, stdev=3215.56, samples=19 00:16:38.044 lat (usec) : 50=73.81%, 100=25.88%, 250=0.11%, 500=0.02%, 750=0.01% 00:16:38.044 lat (usec) : 1000=0.01% 00:16:38.044 lat (msec) : 2=0.04%, 4=0.10%, 10=0.01% 00:16:38.044 cpu : usr=2.70%, sys=13.59%, ctx=189588, majf=0, minf=795 00:16:38.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.044 issued rwts: total=0,189590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.044 00:16:38.044 Run status group 0 (all jobs): 00:16:38.044 WRITE: bw=74.1MiB/s (77.6MB/s), 74.1MiB/s-74.1MiB/s (77.6MB/s-77.6MB/s), io=741MiB (777MB), run=10001-10001msec 00:16:38.044 00:16:38.044 Disk stats (read/write): 00:16:38.044 ublkb0: ios=0/187483, merge=0/0, ticks=0/8396, in_queue=8397, util=99.09% 00:16:38.044 13:29:24 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.044 [2024-11-26 13:29:24.424767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.044 [2024-11-26 13:29:24.468490] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.044 [2024-11-26 13:29:24.469083] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.044 [2024-11-26 13:29:24.477456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.044 [2024-11-26 13:29:24.477704] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:38.044 [2024-11-26 13:29:24.477716] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.044 13:29:24 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.044 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.044 [2024-11-26 13:29:24.493519] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:38.044 request: 00:16:38.045 { 00:16:38.045 "ublk_id": 0, 00:16:38.045 "method": "ublk_stop_disk", 00:16:38.045 "req_id": 1 00:16:38.045 } 00:16:38.045 Got JSON-RPC error response 00:16:38.045 response: 00:16:38.045 { 00:16:38.045 "code": -19, 00:16:38.045 "message": "No such device" 00:16:38.045 } 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.045 13:29:24 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 [2024-11-26 13:29:24.509517] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:38.045 [2024-11-26 13:29:24.513041] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.045 [2024-11-26 13:29:24.513075] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:38.045 13:29:24 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:38.045 13:29:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:38.045 13:29:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:38.045 00:16:38.045 real 0m11.310s 00:16:38.045 user 0m0.581s 00:16:38.045 sys 0m1.442s 00:16:38.045 13:29:25 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.045 ************************************ 00:16:38.045 END TEST test_create_ublk 00:16:38.045 ************************************ 00:16:38.045 13:29:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:25 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:38.045 13:29:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.045 13:29:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.045 13:29:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 ************************************ 00:16:38.045 START TEST test_create_multi_ublk 00:16:38.045 ************************************ 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 [2024-11-26 13:29:25.073462] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:38.045 [2024-11-26 13:29:25.075542] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 [2024-11-26 13:29:25.361845] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:38.045 [2024-11-26 13:29:25.362262] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:38.045 [2024-11-26 13:29:25.362273] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:38.045 [2024-11-26 13:29:25.362284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.045 [2024-11-26 13:29:25.373496] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.045 [2024-11-26 13:29:25.373523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.045 [2024-11-26 13:29:25.385474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.045 [2024-11-26 13:29:25.386122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:38.045 [2024-11-26 13:29:25.425467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 [2024-11-26 13:29:25.737588] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:38.045 [2024-11-26 13:29:25.737979] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:38.045 [2024-11-26 13:29:25.737996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:38.045 [2024-11-26 13:29:25.738003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.045 [2024-11-26 13:29:25.749490] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.045 [2024-11-26 13:29:25.749512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.045 [2024-11-26 13:29:25.761482] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.045 [2024-11-26 13:29:25.762130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:38.045 [2024-11-26 13:29:25.797476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.045 
13:29:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 [2024-11-26 13:29:26.109570] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:38.045 [2024-11-26 13:29:26.109969] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:38.045 [2024-11-26 13:29:26.109984] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:38.045 [2024-11-26 13:29:26.109992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.045 [2024-11-26 13:29:26.121485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.045 [2024-11-26 13:29:26.121509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.045 [2024-11-26 13:29:26.131476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.045 [2024-11-26 13:29:26.132139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:38.045 [2024-11-26 13:29:26.169472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:38.045 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.046 [2024-11-26 13:29:26.445619] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:38.046 [2024-11-26 13:29:26.446003] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:38.046 [2024-11-26 13:29:26.446013] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:38.046 [2024-11-26 13:29:26.446020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.046 
[2024-11-26 13:29:26.453489] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.046 [2024-11-26 13:29:26.453510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.046 [2024-11-26 13:29:26.461476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.046 [2024-11-26 13:29:26.462119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:38.046 [2024-11-26 13:29:26.470503] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:38.046 { 00:16:38.046 "ublk_device": "/dev/ublkb0", 00:16:38.046 "id": 0, 00:16:38.046 "queue_depth": 512, 00:16:38.046 "num_queues": 4, 00:16:38.046 "bdev_name": "Malloc0" 00:16:38.046 }, 00:16:38.046 { 00:16:38.046 "ublk_device": "/dev/ublkb1", 00:16:38.046 "id": 1, 00:16:38.046 "queue_depth": 512, 00:16:38.046 "num_queues": 4, 00:16:38.046 "bdev_name": "Malloc1" 00:16:38.046 }, 00:16:38.046 { 00:16:38.046 "ublk_device": "/dev/ublkb2", 00:16:38.046 "id": 2, 00:16:38.046 "queue_depth": 512, 00:16:38.046 "num_queues": 4, 00:16:38.046 "bdev_name": "Malloc2" 00:16:38.046 }, 00:16:38.046 { 00:16:38.046 "ublk_device": "/dev/ublkb3", 00:16:38.046 "id": 3, 00:16:38.046 "queue_depth": 512, 00:16:38.046 "num_queues": 4, 00:16:38.046 "bdev_name": "Malloc3" 00:16:38.046 } 00:16:38.046 ]' 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.046 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:38.307 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:38.568 13:29:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.568 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.568 [2024-11-26 13:29:27.093559] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.568 [2024-11-26 13:29:27.125161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.568 [2024-11-26 13:29:27.126402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.568 [2024-11-26 13:29:27.132485] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.568 [2024-11-26 13:29:27.132790] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:38.568 [2024-11-26 13:29:27.132800] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.830 [2024-11-26 13:29:27.148528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.830 [2024-11-26 13:29:27.184520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.830 [2024-11-26 13:29:27.185528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.830 [2024-11-26 13:29:27.188740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.830 [2024-11-26 13:29:27.189008] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:38.830 [2024-11-26 13:29:27.189017] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.830 [2024-11-26 13:29:27.207561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.830 [2024-11-26 13:29:27.247002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.830 [2024-11-26 13:29:27.248213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.830 [2024-11-26 13:29:27.254491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.830 [2024-11-26 13:29:27.254779] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:38.830 [2024-11-26 13:29:27.254793] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.830 [2024-11-26 13:29:27.270962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:38.830 [2024-11-26 13:29:27.304076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:38.830 [2024-11-26 13:29:27.305091] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:38.830 [2024-11-26 13:29:27.310478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:38.830 [2024-11-26 13:29:27.310739] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:38.830 [2024-11-26 13:29:27.310748] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.830 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:39.091 [2024-11-26 13:29:27.502528] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.091 [2024-11-26 13:29:27.510461] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.091 [2024-11-26 13:29:27.510491] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.091 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:39.091 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.091 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.091 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.091 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.658 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.658 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.658 13:29:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:39.658 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.658 13:29:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.916 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.916 13:29:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.916 13:29:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:39.916 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.916 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:40.174 13:29:28 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:40.174 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:40.433 13:29:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:40.433 00:16:40.433 real 0m3.707s 00:16:40.433 user 0m0.796s 00:16:40.433 sys 0m0.121s 00:16:40.433 ************************************ 00:16:40.433 END TEST test_create_multi_ublk 00:16:40.433 ************************************ 00:16:40.433 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.433 13:29:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.433 13:29:28 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:40.433 13:29:28 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:40.433 13:29:28 ublk -- ublk/ublk.sh@130 -- # killprocess 73645 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@954 -- # '[' -z 73645 ']' 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@958 -- # kill -0 73645 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@959 -- # uname 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73645 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.433 killing process with pid 73645 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73645' 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@973 -- # kill 73645 00:16:40.433 13:29:28 ublk -- common/autotest_common.sh@978 -- # wait 73645 00:16:40.999 [2024-11-26 13:29:29.394998] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.999 [2024-11-26 13:29:29.395047] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:41.567 00:16:41.567 real 0m25.189s 00:16:41.567 user 0m35.633s 00:16:41.567 sys 0m10.191s 00:16:41.567 13:29:30 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.567 ************************************ 00:16:41.567 END TEST ublk 00:16:41.567 ************************************ 00:16:41.567 13:29:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.567 13:29:30 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:41.567 
13:29:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.567 13:29:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.567 13:29:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.828 ************************************ 00:16:41.828 START TEST ublk_recovery 00:16:41.828 ************************************ 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:41.828 * Looking for test storage... 00:16:41.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.828 13:29:30 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.828 --rc genhtml_branch_coverage=1 00:16:41.828 --rc genhtml_function_coverage=1 00:16:41.828 --rc genhtml_legend=1 00:16:41.828 --rc geninfo_all_blocks=1 00:16:41.828 --rc geninfo_unexecuted_blocks=1 00:16:41.828 00:16:41.828 ' 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.828 --rc genhtml_branch_coverage=1 00:16:41.828 --rc genhtml_function_coverage=1 00:16:41.828 --rc genhtml_legend=1 00:16:41.828 --rc geninfo_all_blocks=1 00:16:41.828 --rc geninfo_unexecuted_blocks=1 00:16:41.828 00:16:41.828 ' 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.828 --rc genhtml_branch_coverage=1 00:16:41.828 --rc genhtml_function_coverage=1 00:16:41.828 --rc genhtml_legend=1 00:16:41.828 --rc geninfo_all_blocks=1 00:16:41.828 --rc geninfo_unexecuted_blocks=1 00:16:41.828 00:16:41.828 ' 00:16:41.828 13:29:30 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.828 --rc genhtml_branch_coverage=1 00:16:41.828 --rc genhtml_function_coverage=1 00:16:41.828 --rc genhtml_legend=1 00:16:41.828 --rc geninfo_all_blocks=1 00:16:41.828 --rc geninfo_unexecuted_blocks=1 00:16:41.828 00:16:41.828 ' 00:16:41.828 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:41.828 13:29:30 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:41.828 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:41.828 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74050 00:16:41.828 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:41.828 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74050 00:16:41.829 13:29:30 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74050 ']' 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.829 13:29:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.829 [2024-11-26 13:29:30.380017] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:16:41.829 [2024-11-26 13:29:30.380181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74050 ] 00:16:42.089 [2024-11-26 13:29:30.549980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.351 [2024-11-26 13:29:30.697418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.351 [2024-11-26 13:29:30.697513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:42.922 13:29:31 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.922 [2024-11-26 13:29:31.361468] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:42.922 [2024-11-26 13:29:31.363482] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.922 13:29:31 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.922 malloc0 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.922 13:29:31 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.922 13:29:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.922 [2024-11-26 13:29:31.473613] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:42.922 [2024-11-26 13:29:31.473720] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:42.922 [2024-11-26 13:29:31.473731] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:42.922 [2024-11-26 13:29:31.473741] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:42.922 [2024-11-26 13:29:31.482577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:42.922 [2024-11-26 13:29:31.482599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.183 [2024-11-26 13:29:31.489475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.183 [2024-11-26 13:29:31.489628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:43.183 [2024-11-26 13:29:31.505477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.183 1 00:16:43.183 13:29:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.183 13:29:31 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:44.127 13:29:32 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74087 00:16:44.127 13:29:32 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:44.127 13:29:32 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:44.127 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:44.127 fio-3.35 00:16:44.127 Starting 1 process 00:16:49.403 13:29:37 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74050 00:16:49.403 13:29:37 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:54.686 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74050 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:54.686 13:29:42 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74193 00:16:54.686 13:29:42 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.686 13:29:42 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74193 00:16:54.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74193 ']' 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.686 13:29:42 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.686 13:29:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.686 [2024-11-26 13:29:42.602718] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
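At this point the recovery test has a fio randrw job running against /dev/ublkb1, SIGKILLs the original spdk_tgt (pid 74050) mid-I/O, and boots a replacement (pid 74193 here). Condensed, the sequence ublk_recovery.sh drives is roughly the following sketch (pids are specific to this run):

  kill -9 74050                                  # SIGKILL the target while I/O is in flight
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # start a fresh target (pid 74193 in this run)
  rpc.py ublk_create_target                      # recreate the UBLK target in the new process
  rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev under the same name
  rpc.py ublk_recover_disk malloc0 1             # re-attach the still-live kernel device ublkb1

As the control-command trace below shows, recovery issues UBLK_CMD_GET_DEV_INFO (retrying while the device settles), then UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY to hand the existing queues over to the new process.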
00:16:54.686 [2024-11-26 13:29:42.602835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74193 ] 00:16:54.686 [2024-11-26 13:29:42.760318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.686 [2024-11-26 13:29:42.837119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.686 [2024-11-26 13:29:42.837192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.947 13:29:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:54.948 13:29:43 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 [2024-11-26 13:29:43.388459] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.948 [2024-11-26 13:29:43.389962] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.948 13:29:43 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 malloc0 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.948 13:29:43 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:54.948 [2024-11-26 13:29:43.468642] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:54.948 [2024-11-26 13:29:43.468674] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:54.948 [2024-11-26 13:29:43.468682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:54.948 [2024-11-26 13:29:43.476481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:54.948 [2024-11-26 13:29:43.476504] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:54.948 1 00:16:54.948 13:29:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.948 13:29:43 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74087 00:16:56.334 [2024-11-26 13:29:44.476527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.334 [2024-11-26 13:29:44.484466] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.334 [2024-11-26 13:29:44.484484] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:57.277 [2024-11-26 13:29:45.484511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.277 [2024-11-26 13:29:45.488463] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.277 [2024-11-26 13:29:45.488479] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:58.220 [2024-11-26 13:29:46.488498] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:58.220 [2024-11-26 13:29:46.496461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:58.220 [2024-11-26 13:29:46.496479] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:58.220 [2024-11-26 13:29:46.496488] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:58.220 [2024-11-26 13:29:46.496554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:20.174 [2024-11-26 13:30:07.760474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:20.174 [2024-11-26 13:30:07.764435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:20.174 [2024-11-26 13:30:07.774665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:20.174 [2024-11-26 13:30:07.774684] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:46.718 00:17:46.718 fio_test: (groupid=0, jobs=1): err= 0: pid=74090: Tue Nov 26 13:30:32 2024 00:17:46.718 read: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(3429MiB/60002msec) 00:17:46.718 slat (nsec): min=858, max=121677, avg=4901.57, stdev=1385.27 00:17:46.718 clat (usec): min=937, max=30266k, avg=4711.78, stdev=279724.88 00:17:46.718 lat (usec): min=942, max=30266k, avg=4716.68, stdev=279724.88 00:17:46.718 clat percentiles (usec): 00:17:46.718 | 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1926], 00:17:46.718 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:17:46.718 | 70.00th=[ 2024], 80.00th=[ 2057], 90.00th=[ 2376], 95.00th=[ 3032], 00:17:46.718 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7832], 99.95th=[12911], 00:17:46.718 | 99.99th=[13566] 00:17:46.718 bw ( KiB/s): min=56000, max=132056, per=100.00%, avg=117198.02, stdev=14877.41, samples=59 00:17:46.718 iops : min=14000, max=33014, avg=29299.46, stdev=3719.34, samples=59 00:17:46.718 write: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(3423MiB/60002msec); 0 zone resets 00:17:46.718 slat (nsec): min=952, max=155004, avg=4942.31, stdev=1412.47 00:17:46.718 clat (usec): min=523, max=30266k, avg=4033.79, stdev=235341.54 00:17:46.718 lat (usec): min=534, max=30266k, avg=4038.74, stdev=235341.54 00:17:46.718 clat percentiles (usec): 00:17:46.718 | 1.00th=[ 1795], 5.00th=[ 1909], 10.00th=[ 1942], 20.00th=[ 2008], 00:17:46.718 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:17:46.718 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2474], 95.00th=[ 2966], 00:17:46.718 | 99.00th=[ 5145], 99.50th=[ 5800], 99.90th=[ 7832], 99.95th=[12911], 00:17:46.718 | 99.99th=[13566] 00:17:46.718 bw ( KiB/s): min=56784, max=130432, per=100.00%, avg=117032.02, stdev=14708.30, samples=59 00:17:46.718 iops : min=14196, max=32608, avg=29258.00, stdev=3677.07, samples=59 00:17:46.718 lat (usec) : 750=0.01%, 1000=0.01% 00:17:46.718 lat (msec) : 2=37.61%, 4=59.70%, 10=2.63%, 20=0.05%, >=2000=0.01% 00:17:46.718 cpu : usr=3.13%, sys=14.69%, ctx=57594, majf=0, minf=14 00:17:46.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:46.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.718 
issued rwts: total=877751,876324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.718 00:17:46.718 Run status group 0 (all jobs): 00:17:46.718 READ: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=3429MiB (3595MB), run=60002-60002msec 00:17:46.718 WRITE: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=3423MiB (3589MB), run=60002-60002msec 00:17:46.718 00:17:46.718 Disk stats (read/write): 00:17:46.718 ublkb1: ios=875039/873621, merge=0/0, ticks=4086312/3413949, in_queue=7500262, util=99.92% 00:17:46.718 13:30:32 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 [2024-11-26 13:30:32.768587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:46.718 [2024-11-26 13:30:32.800573] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:46.718 [2024-11-26 13:30:32.800715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:46.718 [2024-11-26 13:30:32.807222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:46.718 [2024-11-26 13:30:32.807330] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:46.718 [2024-11-26 13:30:32.807337] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.718 13:30:32 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 [2024-11-26 13:30:32.820573] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:46.718 [2024-11-26 13:30:32.828456] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:46.718 [2024-11-26 13:30:32.828489] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.718 13:30:32 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:46.718 13:30:32 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:46.718 13:30:32 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74193 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74193 ']' 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74193 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74193 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:46.718 killing process with pid 74193 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74193' 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74193 00:17:46.718 13:30:32 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74193 
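Teardown mirrors setup in reverse: stop the disk, destroy the target, then kill the process. Condensed, using the rpc.py form shown earlier in the log; the raised client timeout (-t 120) presumably covers however long the destroy can take:

  rpc.py ublk_stop_disk 1               # STOP_DEV, then DEL_DEV, for /dev/ublkb1
  rpc.py -t 120 ublk_destroy_target     # tear down the target; -t raises the RPC client timeout
  killprocess 74193                     # test helper: verify the pid, signal it, and wait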
00:17:46.718 [2024-11-26 13:30:33.891372] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:46.718 [2024-11-26 13:30:33.891422] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:46.718 00:17:46.718 real 1m4.467s 00:17:46.718 user 1m49.026s 00:17:46.718 sys 0m19.792s 00:17:46.718 13:30:34 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.718 13:30:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 ************************************ 00:17:46.718 END TEST ublk_recovery 00:17:46.718 ************************************ 00:17:46.718 13:30:34 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:46.718 13:30:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:46.718 13:30:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.718 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 13:30:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:46.718 13:30:34 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:46.718 13:30:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:46.718 13:30:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.718 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 ************************************ 00:17:46.718 START TEST ftl 00:17:46.718 ************************************ 00:17:46.718 13:30:34 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:46.718 * Looking for test storage... 
00:17:46.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.718 13:30:34 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:46.718 13:30:34 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:46.718 13:30:34 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:46.718 13:30:34 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:46.718 13:30:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.718 13:30:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.718 13:30:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.718 13:30:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.718 13:30:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.719 13:30:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.719 13:30:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.719 13:30:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:46.719 13:30:34 ftl -- scripts/common.sh@345 -- # : 1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.719 13:30:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:46.719 13:30:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@353 -- # local d=1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.719 13:30:34 ftl -- scripts/common.sh@355 -- # echo 1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.719 13:30:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@353 -- # local d=2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.719 13:30:34 ftl -- scripts/common.sh@355 -- # echo 2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.719 13:30:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.719 13:30:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.719 13:30:34 ftl -- scripts/common.sh@368 -- # return 0 00:17:46.719 13:30:34 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.719 13:30:34 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:46.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.719 --rc genhtml_branch_coverage=1 00:17:46.719 --rc genhtml_function_coverage=1 00:17:46.719 --rc genhtml_legend=1 00:17:46.719 --rc geninfo_all_blocks=1 00:17:46.719 --rc geninfo_unexecuted_blocks=1 00:17:46.719 00:17:46.719 ' 00:17:46.719 13:30:34 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:46.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.719 --rc genhtml_branch_coverage=1 00:17:46.719 --rc genhtml_function_coverage=1 00:17:46.719 --rc genhtml_legend=1 00:17:46.719 --rc geninfo_all_blocks=1 00:17:46.719 --rc geninfo_unexecuted_blocks=1 00:17:46.719 00:17:46.719 ' 00:17:46.719 13:30:34 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:46.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.719 --rc genhtml_branch_coverage=1 00:17:46.719 --rc genhtml_function_coverage=1 00:17:46.719 --rc 
genhtml_legend=1 00:17:46.719 --rc geninfo_all_blocks=1 00:17:46.719 --rc geninfo_unexecuted_blocks=1 00:17:46.719 00:17:46.719 ' 00:17:46.719 13:30:34 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:46.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.719 --rc genhtml_branch_coverage=1 00:17:46.719 --rc genhtml_function_coverage=1 00:17:46.719 --rc genhtml_legend=1 00:17:46.719 --rc geninfo_all_blocks=1 00:17:46.719 --rc geninfo_unexecuted_blocks=1 00:17:46.719 00:17:46.719 ' 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:46.719 13:30:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:46.719 13:30:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.719 13:30:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:46.719 13:30:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:46.719 13:30:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:46.719 13:30:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.719 13:30:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.719 13:30:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.719 13:30:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.719 13:30:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:46.719 13:30:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:46.719 13:30:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:46.719 13:30:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.719 13:30:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.719 13:30:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:46.719 13:30:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.719 13:30:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:46.719 13:30:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.719 13:30:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:46.719 13:30:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:46.719 13:30:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:46.719 13:30:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.719 13:30:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:46.719 13:30:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:46.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:46.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.979 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.979 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:46.979 13:30:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74992 00:17:46.979 13:30:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74992 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 74992 ']' 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.979 13:30:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:46.979 13:30:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:46.979 [2024-11-26 13:30:35.432703] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:46.979 [2024-11-26 13:30:35.432823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74992 ] 00:17:47.239 [2024-11-26 13:30:35.592366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.239 [2024-11-26 13:30:35.694626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.810 13:30:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.810 13:30:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:47.810 13:30:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:48.071 13:30:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:48.641 13:30:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:48.641 13:30:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@50 -- # break 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:49.212 13:30:37 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:49.212 13:30:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:49.470 13:30:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:49.470 13:30:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:49.470 13:30:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:49.470 13:30:37 ftl -- ftl/ftl.sh@63 -- # break 00:17:49.470 13:30:37 ftl -- ftl/ftl.sh@66 -- # killprocess 74992 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 74992 ']' 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@958 -- # kill -0 74992 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@959 -- # uname 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74992 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.470 killing process with pid 74992 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74992' 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@973 -- # kill 74992 00:17:49.470 13:30:37 ftl -- common/autotest_common.sh@978 -- # wait 74992 00:17:50.852 13:30:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:50.852 13:30:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:50.852 13:30:39 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:50.852 13:30:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.852 13:30:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:50.852 ************************************ 00:17:50.852 START TEST ftl_fio_basic 00:17:50.852 ************************************ 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:50.852 * Looking for test storage... 
00:17:50.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.852 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.853 --rc genhtml_branch_coverage=1 00:17:50.853 --rc genhtml_function_coverage=1 00:17:50.853 --rc genhtml_legend=1 00:17:50.853 --rc geninfo_all_blocks=1 00:17:50.853 --rc geninfo_unexecuted_blocks=1 00:17:50.853 00:17:50.853 ' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.853 --rc 
genhtml_branch_coverage=1 00:17:50.853 --rc genhtml_function_coverage=1 00:17:50.853 --rc genhtml_legend=1 00:17:50.853 --rc geninfo_all_blocks=1 00:17:50.853 --rc geninfo_unexecuted_blocks=1 00:17:50.853 00:17:50.853 ' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.853 --rc genhtml_branch_coverage=1 00:17:50.853 --rc genhtml_function_coverage=1 00:17:50.853 --rc genhtml_legend=1 00:17:50.853 --rc geninfo_all_blocks=1 00:17:50.853 --rc geninfo_unexecuted_blocks=1 00:17:50.853 00:17:50.853 ' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.853 --rc genhtml_branch_coverage=1 00:17:50.853 --rc genhtml_function_coverage=1 00:17:50.853 --rc genhtml_legend=1 00:17:50.853 --rc geninfo_all_blocks=1 00:17:50.853 --rc geninfo_unexecuted_blocks=1 00:17:50.853 00:17:50.853 ' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:50.853 
13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75124 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75124 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75124 ']' 00:17:50.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
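For orientation before the target comes up: fio.sh has mapped the 'basic' suite to the jobs randw-verify, randw-verify-j2 and randw-verify-depth128, named the FTL bdev to be created ftl0 (FTL_BDEV_NAME), pointed FTL_JSON_CONF at test/ftl/config/ftl.json, and launched a dedicated spdk_tgt. A minimal sketch of that launch pattern, assuming the helpers from autotest_common.sh are sourced as in this run:

    # -m 7 is reactor core mask 0x7 (cores 0, 1, 2 -> the three reactors started below);
    # waitforlisten polls until the target answers RPCs on /var/tmp/spdk.sock.
    trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!                   # 75124 in this run
    waitforlisten $svcpid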
00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:50.853 13:30:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:50.853 [2024-11-26 13:30:39.354156] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:17:50.853 [2024-11-26 13:30:39.354275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75124 ] 00:17:51.112 [2024-11-26 13:30:39.511591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:51.112 [2024-11-26 13:30:39.591133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.112 [2024-11-26 13:30:39.591429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.112 [2024-11-26 13:30:39.591437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:51.683 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:51.943 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:51.943 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:51.943 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:51.943 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:51.943 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:51.944 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:51.944 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:51.944 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:52.205 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:52.205 { 00:17:52.205 "name": "nvme0n1", 00:17:52.205 "aliases": [ 00:17:52.205 "cacc7d64-43b1-47b3-96dd-77e494bed886" 00:17:52.205 ], 00:17:52.205 "product_name": "NVMe disk", 00:17:52.205 "block_size": 4096, 00:17:52.205 "num_blocks": 1310720, 00:17:52.205 "uuid": "cacc7d64-43b1-47b3-96dd-77e494bed886", 00:17:52.205 "numa_id": -1, 00:17:52.205 "assigned_rate_limits": { 00:17:52.205 "rw_ios_per_sec": 0, 00:17:52.205 "rw_mbytes_per_sec": 0, 00:17:52.205 "r_mbytes_per_sec": 0, 00:17:52.205 "w_mbytes_per_sec": 0 00:17:52.205 }, 00:17:52.205 "claimed": false, 00:17:52.205 "zoned": false, 00:17:52.205 "supported_io_types": { 00:17:52.205 "read": true, 00:17:52.205 "write": true, 00:17:52.205 "unmap": true, 00:17:52.205 "flush": true, 
00:17:52.205 "reset": true, 00:17:52.205 "nvme_admin": true, 00:17:52.205 "nvme_io": true, 00:17:52.205 "nvme_io_md": false, 00:17:52.205 "write_zeroes": true, 00:17:52.205 "zcopy": false, 00:17:52.205 "get_zone_info": false, 00:17:52.205 "zone_management": false, 00:17:52.205 "zone_append": false, 00:17:52.205 "compare": true, 00:17:52.205 "compare_and_write": false, 00:17:52.205 "abort": true, 00:17:52.205 "seek_hole": false, 00:17:52.205 "seek_data": false, 00:17:52.205 "copy": true, 00:17:52.205 "nvme_iov_md": false 00:17:52.205 }, 00:17:52.205 "driver_specific": { 00:17:52.205 "nvme": [ 00:17:52.205 { 00:17:52.205 "pci_address": "0000:00:11.0", 00:17:52.205 "trid": { 00:17:52.205 "trtype": "PCIe", 00:17:52.205 "traddr": "0000:00:11.0" 00:17:52.205 }, 00:17:52.205 "ctrlr_data": { 00:17:52.205 "cntlid": 0, 00:17:52.205 "vendor_id": "0x1b36", 00:17:52.206 "model_number": "QEMU NVMe Ctrl", 00:17:52.206 "serial_number": "12341", 00:17:52.206 "firmware_revision": "8.0.0", 00:17:52.206 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:52.206 "oacs": { 00:17:52.206 "security": 0, 00:17:52.206 "format": 1, 00:17:52.206 "firmware": 0, 00:17:52.206 "ns_manage": 1 00:17:52.206 }, 00:17:52.206 "multi_ctrlr": false, 00:17:52.206 "ana_reporting": false 00:17:52.206 }, 00:17:52.206 "vs": { 00:17:52.206 "nvme_version": "1.4" 00:17:52.206 }, 00:17:52.206 "ns_data": { 00:17:52.206 "id": 1, 00:17:52.206 "can_share": false 00:17:52.206 } 00:17:52.206 } 00:17:52.206 ], 00:17:52.206 "mp_policy": "active_passive" 00:17:52.206 } 00:17:52.206 } 00:17:52.206 ]' 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.206 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:52.467 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:52.467 13:30:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:52.728 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=f1ce474d-dc0a-459f-852f-29a83cdd5754 00:17:52.728 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f1ce474d-dc0a-459f-852f-29a83cdd5754 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:52.989 13:30:41 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:52.989 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:52.989 { 00:17:52.989 "name": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:52.989 "aliases": [ 00:17:52.989 "lvs/nvme0n1p0" 00:17:52.989 ], 00:17:52.989 "product_name": "Logical Volume", 00:17:52.989 "block_size": 4096, 00:17:52.989 "num_blocks": 26476544, 00:17:52.989 "uuid": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:52.989 "assigned_rate_limits": { 00:17:52.990 "rw_ios_per_sec": 0, 00:17:52.990 "rw_mbytes_per_sec": 0, 00:17:52.990 "r_mbytes_per_sec": 0, 00:17:52.990 "w_mbytes_per_sec": 0 00:17:52.990 }, 00:17:52.990 "claimed": false, 00:17:52.990 "zoned": false, 00:17:52.990 "supported_io_types": { 00:17:52.990 "read": true, 00:17:52.990 "write": true, 00:17:52.990 "unmap": true, 00:17:52.990 "flush": false, 00:17:52.990 "reset": true, 00:17:52.990 "nvme_admin": false, 00:17:52.990 "nvme_io": false, 00:17:52.990 "nvme_io_md": false, 00:17:52.990 "write_zeroes": true, 00:17:52.990 "zcopy": false, 00:17:52.990 "get_zone_info": false, 00:17:52.990 "zone_management": false, 00:17:52.990 "zone_append": false, 00:17:52.990 "compare": false, 00:17:52.990 "compare_and_write": false, 00:17:52.990 "abort": false, 00:17:52.990 "seek_hole": true, 00:17:52.990 "seek_data": true, 00:17:52.990 "copy": false, 00:17:52.990 "nvme_iov_md": false 00:17:52.990 }, 00:17:52.990 "driver_specific": { 00:17:52.990 "lvol": { 00:17:52.990 "lvol_store_uuid": "f1ce474d-dc0a-459f-852f-29a83cdd5754", 00:17:52.990 "base_bdev": "nvme0n1", 00:17:52.990 "thin_provision": true, 00:17:52.990 "num_allocated_clusters": 0, 00:17:52.990 "snapshot": false, 00:17:52.990 "clone": false, 00:17:52.990 "esnap_clone": false 00:17:52.990 } 00:17:52.990 } 00:17:52.990 } 00:17:52.990 ]' 00:17:52.990 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:52.990 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:52.990 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:53.252 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
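To recap the provisioning to this point: the base namespace nvme0n1 reports block_size 4096 and num_blocks 1310720, so get_bdev_size computes 4096 x 1310720 = 5120 MiB; an lvstore named lvs is created on it, and a 103424 MiB lvol is carved out with thin provisioning (-t), which is what lets a ~101 GiB logical device sit on a 5 GiB base namespace for this test. A condensed sketch of the RPC sequence, with names and UUIDs exactly as reported in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base dev -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> f1ce474d-dc0a-459f-852f-29a83cdd5754
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u f1ce474d-dc0a-459f-852f-29a83cdd5754  # thin lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache dev -> nvc0n1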
00:17:53.513 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.513 13:30:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:53.513 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.513 { 00:17:53.513 "name": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:53.513 "aliases": [ 00:17:53.513 "lvs/nvme0n1p0" 00:17:53.513 ], 00:17:53.513 "product_name": "Logical Volume", 00:17:53.513 "block_size": 4096, 00:17:53.513 "num_blocks": 26476544, 00:17:53.513 "uuid": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:53.513 "assigned_rate_limits": { 00:17:53.513 "rw_ios_per_sec": 0, 00:17:53.513 "rw_mbytes_per_sec": 0, 00:17:53.513 "r_mbytes_per_sec": 0, 00:17:53.513 "w_mbytes_per_sec": 0 00:17:53.513 }, 00:17:53.513 "claimed": false, 00:17:53.513 "zoned": false, 00:17:53.513 "supported_io_types": { 00:17:53.513 "read": true, 00:17:53.513 "write": true, 00:17:53.513 "unmap": true, 00:17:53.513 "flush": false, 00:17:53.513 "reset": true, 00:17:53.513 "nvme_admin": false, 00:17:53.513 "nvme_io": false, 00:17:53.513 "nvme_io_md": false, 00:17:53.513 "write_zeroes": true, 00:17:53.513 "zcopy": false, 00:17:53.513 "get_zone_info": false, 00:17:53.513 "zone_management": false, 00:17:53.513 "zone_append": false, 00:17:53.513 "compare": false, 00:17:53.513 "compare_and_write": false, 00:17:53.513 "abort": false, 00:17:53.513 "seek_hole": true, 00:17:53.513 "seek_data": true, 00:17:53.513 "copy": false, 00:17:53.513 "nvme_iov_md": false 00:17:53.513 }, 00:17:53.513 "driver_specific": { 00:17:53.513 "lvol": { 00:17:53.513 "lvol_store_uuid": "f1ce474d-dc0a-459f-852f-29a83cdd5754", 00:17:53.513 "base_bdev": "nvme0n1", 00:17:53.513 "thin_provision": true, 00:17:53.513 "num_allocated_clusters": 0, 00:17:53.513 "snapshot": false, 00:17:53.513 "clone": false, 00:17:53.513 "esnap_clone": false 00:17:53.513 } 00:17:53.513 } 00:17:53.513 } 00:17:53.513 ]' 00:17:53.513 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.513 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.513 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:53.775 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.775 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6cc14fd7-5453-496d-99b5-1a3d0a9c13ce 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.036 { 00:17:54.036 "name": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:54.036 "aliases": [ 00:17:54.036 "lvs/nvme0n1p0" 00:17:54.036 ], 00:17:54.036 "product_name": "Logical Volume", 00:17:54.036 "block_size": 4096, 00:17:54.036 "num_blocks": 26476544, 00:17:54.036 "uuid": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:54.036 "assigned_rate_limits": { 00:17:54.036 "rw_ios_per_sec": 0, 00:17:54.036 "rw_mbytes_per_sec": 0, 00:17:54.036 "r_mbytes_per_sec": 0, 00:17:54.036 "w_mbytes_per_sec": 0 00:17:54.036 }, 00:17:54.036 "claimed": false, 00:17:54.036 "zoned": false, 00:17:54.036 "supported_io_types": { 00:17:54.036 "read": true, 00:17:54.036 "write": true, 00:17:54.036 "unmap": true, 00:17:54.036 "flush": false, 00:17:54.036 "reset": true, 00:17:54.036 "nvme_admin": false, 00:17:54.036 "nvme_io": false, 00:17:54.036 "nvme_io_md": false, 00:17:54.036 "write_zeroes": true, 00:17:54.036 "zcopy": false, 00:17:54.036 "get_zone_info": false, 00:17:54.036 "zone_management": false, 00:17:54.036 "zone_append": false, 00:17:54.036 "compare": false, 00:17:54.036 "compare_and_write": false, 00:17:54.036 "abort": false, 00:17:54.036 "seek_hole": true, 00:17:54.036 "seek_data": true, 00:17:54.036 "copy": false, 00:17:54.036 "nvme_iov_md": false 00:17:54.036 }, 00:17:54.036 "driver_specific": { 00:17:54.036 "lvol": { 00:17:54.036 "lvol_store_uuid": "f1ce474d-dc0a-459f-852f-29a83cdd5754", 00:17:54.036 "base_bdev": "nvme0n1", 00:17:54.036 "thin_provision": true, 00:17:54.036 "num_allocated_clusters": 0, 00:17:54.036 "snapshot": false, 00:17:54.036 "clone": false, 00:17:54.036 "esnap_clone": false 00:17:54.036 } 00:17:54.036 } 00:17:54.036 } 00:17:54.036 ]' 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:54.036 13:30:42 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
6cc14fd7-5453-496d-99b5-1a3d0a9c13ce -c nvc0n1p0 --l2p_dram_limit 60 00:17:54.298 [2024-11-26 13:30:42.740355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.740394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.298 [2024-11-26 13:30:42.740407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.298 [2024-11-26 13:30:42.740413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.740473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.740482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.298 [2024-11-26 13:30:42.740489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:54.298 [2024-11-26 13:30:42.740495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.740532] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.298 [2024-11-26 13:30:42.741112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.298 [2024-11-26 13:30:42.741136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.741143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.298 [2024-11-26 13:30:42.741151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:17:54.298 [2024-11-26 13:30:42.741158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.741218] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 11adbe0d-52d5-4e2d-b737-b10f347c7e6f 00:17:54.298 [2024-11-26 13:30:42.742270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.742300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:54.298 [2024-11-26 13:30:42.742308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:54.298 [2024-11-26 13:30:42.742315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.747596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.747627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.298 [2024-11-26 13:30:42.747635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.222 ms 00:17:54.298 [2024-11-26 13:30:42.747646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.747728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.747737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.298 [2024-11-26 13:30:42.747743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:54.298 [2024-11-26 13:30:42.747753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.747794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.298 [2024-11-26 13:30:42.747804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.298 [2024-11-26 13:30:42.747810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.007 ms 00:17:54.298 [2024-11-26 13:30:42.747817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.298 [2024-11-26 13:30:42.747845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.298 [2024-11-26 13:30:42.750755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.299 [2024-11-26 13:30:42.750781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.299 [2024-11-26 13:30:42.750794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:17:54.299 [2024-11-26 13:30:42.750800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.299 [2024-11-26 13:30:42.750838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.299 [2024-11-26 13:30:42.750845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.299 [2024-11-26 13:30:42.750853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.299 [2024-11-26 13:30:42.750859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.299 [2024-11-26 13:30:42.750881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:54.299 [2024-11-26 13:30:42.750995] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.299 [2024-11-26 13:30:42.751011] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.299 [2024-11-26 13:30:42.751020] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.299 [2024-11-26 13:30:42.751030] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751044] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:54.299 [2024-11-26 13:30:42.751050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.299 [2024-11-26 13:30:42.751057] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.299 [2024-11-26 13:30:42.751063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.299 [2024-11-26 13:30:42.751072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.299 [2024-11-26 13:30:42.751077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.299 [2024-11-26 13:30:42.751086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:54.299 [2024-11-26 13:30:42.751091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.299 [2024-11-26 13:30:42.751168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.299 [2024-11-26 13:30:42.751175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.299 [2024-11-26 13:30:42.751182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:54.299 [2024-11-26 13:30:42.751187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.299 [2024-11-26 13:30:42.751294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:17:54.299 [2024-11-26 13:30:42.751309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.299 [2024-11-26 13:30:42.751317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.299 [2024-11-26 13:30:42.751335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.299 [2024-11-26 13:30:42.751354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.299 [2024-11-26 13:30:42.751366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.299 [2024-11-26 13:30:42.751371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:54.299 [2024-11-26 13:30:42.751378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.299 [2024-11-26 13:30:42.751383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.299 [2024-11-26 13:30:42.751389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:54.299 [2024-11-26 13:30:42.751394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.299 [2024-11-26 13:30:42.751409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.299 [2024-11-26 13:30:42.751427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.299 [2024-11-26 13:30:42.751454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.299 [2024-11-26 13:30:42.751472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.299 [2024-11-26 13:30:42.751488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.299 [2024-11-26 13:30:42.751508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.299 [2024-11-26 13:30:42.751530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.299 [2024-11-26 13:30:42.751535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:54.299 [2024-11-26 13:30:42.751541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.299 [2024-11-26 13:30:42.751546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.299 [2024-11-26 13:30:42.751552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:54.299 [2024-11-26 13:30:42.751557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.299 [2024-11-26 13:30:42.751568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:54.299 [2024-11-26 13:30:42.751575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751582] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.299 [2024-11-26 13:30:42.751589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.299 [2024-11-26 13:30:42.751594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.299 [2024-11-26 13:30:42.751606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.299 [2024-11-26 13:30:42.751614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.299 [2024-11-26 13:30:42.751620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.299 [2024-11-26 13:30:42.751627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.299 [2024-11-26 13:30:42.751632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.299 [2024-11-26 13:30:42.751639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.299 [2024-11-26 13:30:42.751646] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.299 [2024-11-26 13:30:42.751654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:54.299 [2024-11-26 13:30:42.751668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:54.299 [2024-11-26 13:30:42.751674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:54.299 [2024-11-26 13:30:42.751680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:54.299 [2024-11-26 13:30:42.751686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:54.299 [2024-11-26 13:30:42.751693] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:54.299 [2024-11-26 13:30:42.751698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:54.299 [2024-11-26 13:30:42.751704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:54.299 [2024-11-26 13:30:42.751709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:54.299 [2024-11-26 13:30:42.751718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:54.299 [2024-11-26 13:30:42.751748] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.299 [2024-11-26 13:30:42.751755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.299 [2024-11-26 13:30:42.751769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.300 [2024-11-26 13:30:42.751774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.300 [2024-11-26 13:30:42.751781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.300 [2024-11-26 13:30:42.751790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.300 [2024-11-26 13:30:42.751797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.300 [2024-11-26 13:30:42.751803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:17:54.300 [2024-11-26 13:30:42.751810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.300 [2024-11-26 13:30:42.751878] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
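[annotation] The region dump (MiB figures) and the superblock metadata dump (hex blk_offs/blk_sz) above describe the same layout in two units. Assuming the 4096-byte block size that bdev_get_bdevs reports for ftl0 further down — and inferring from the matching 80 MiB size that the type:0x2 entry is the l2p region — the hex fields convert directly into the MiB columns. A minimal shell check:

# Sketch only: blk_offs/blk_sz are counted in 4 KiB blocks (block size taken
# from the bdev_get_bdevs output below); the type-to-region-name mapping is
# inferred from matching sizes, not stated by the log itself.
echo $(( 0x20 * 4096 / 1024 ))        # 128 KiB = 0.12 MiB -> "Region l2p ... offset: 0.12 MiB"
echo $(( 0x5000 * 4096 / 1048576 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
echo $(( 0x800 * 4096 / 1048576 ))    # 8  -> each p2l region's "blocks: 8.00 MiB"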
00:17:54.300 [2024-11-26 13:30:42.751889] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.836 [2024-11-26 13:30:45.126200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.126264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.836 [2024-11-26 13:30:45.126279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2374.311 ms 00:17:56.836 [2024-11-26 13:30:45.126290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.153786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.153834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.836 [2024-11-26 13:30:45.153848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.194 ms 00:17:56.836 [2024-11-26 13:30:45.153859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.154002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.154015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.836 [2024-11-26 13:30:45.154025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:56.836 [2024-11-26 13:30:45.154037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.202557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.202651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.836 [2024-11-26 13:30:45.202683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.467 ms 00:17:56.836 [2024-11-26 13:30:45.202712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.202811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.202840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.836 [2024-11-26 13:30:45.202862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:56.836 [2024-11-26 13:30:45.202886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.203609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.203672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.836 [2024-11-26 13:30:45.203702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:17:56.836 [2024-11-26 13:30:45.203726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.204043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.204090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.836 [2024-11-26 13:30:45.204112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:56.836 [2024-11-26 13:30:45.204141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.221696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.221727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.836 [2024-11-26 
13:30:45.221738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.495 ms 00:17:56.836 [2024-11-26 13:30:45.221747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.233950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.836 [2024-11-26 13:30:45.251100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.251129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.836 [2024-11-26 13:30:45.251144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.262 ms 00:17:56.836 [2024-11-26 13:30:45.251152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.299109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.299151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.836 [2024-11-26 13:30:45.299167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.913 ms 00:17:56.836 [2024-11-26 13:30:45.299176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.299362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.299379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.836 [2024-11-26 13:30:45.299392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:56.836 [2024-11-26 13:30:45.299400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.322124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.322156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.836 [2024-11-26 13:30:45.322168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.655 ms 00:17:56.836 [2024-11-26 13:30:45.322176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.344425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.344462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.836 [2024-11-26 13:30:45.344475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.202 ms 00:17:56.836 [2024-11-26 13:30:45.344483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.836 [2024-11-26 13:30:45.345080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.836 [2024-11-26 13:30:45.345102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.836 [2024-11-26 13:30:45.345113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:56.836 [2024-11-26 13:30:45.345121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.097 [2024-11-26 13:30:45.409646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.097 [2024-11-26 13:30:45.409678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:57.097 [2024-11-26 13:30:45.409696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.482 ms 00:17:57.097 [2024-11-26 13:30:45.409704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.097 [2024-11-26 
13:30:45.434215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.098 [2024-11-26 13:30:45.434247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:57.098 [2024-11-26 13:30:45.434260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:17:57.098 [2024-11-26 13:30:45.434268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.098 [2024-11-26 13:30:45.458791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.098 [2024-11-26 13:30:45.458823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:57.098 [2024-11-26 13:30:45.458835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.479 ms 00:17:57.098 [2024-11-26 13:30:45.458842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.098 [2024-11-26 13:30:45.482399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.098 [2024-11-26 13:30:45.482451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.098 [2024-11-26 13:30:45.482465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.517 ms 00:17:57.098 [2024-11-26 13:30:45.482472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.098 [2024-11-26 13:30:45.482519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.098 [2024-11-26 13:30:45.482528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.098 [2024-11-26 13:30:45.482543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:57.098 [2024-11-26 13:30:45.482550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.098 [2024-11-26 13:30:45.482646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.098 [2024-11-26 13:30:45.482656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:57.098 [2024-11-26 13:30:45.482666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:57.098 [2024-11-26 13:30:45.482673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.098 [2024-11-26 13:30:45.483795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2742.961 ms, result 0 00:17:57.098 { 00:17:57.098 "name": "ftl0", 00:17:57.098 "uuid": "11adbe0d-52d5-4e2d-b737-b10f347c7e6f" 00:17:57.098 } 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.098 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:57.357 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:57.357 [ 00:17:57.357 { 00:17:57.357 "name": "ftl0", 00:17:57.357 "aliases": [ 00:17:57.357 "11adbe0d-52d5-4e2d-b737-b10f347c7e6f" 00:17:57.357 ], 00:17:57.357 "product_name": "FTL 
disk", 00:17:57.357 "block_size": 4096, 00:17:57.357 "num_blocks": 20971520, 00:17:57.357 "uuid": "11adbe0d-52d5-4e2d-b737-b10f347c7e6f", 00:17:57.357 "assigned_rate_limits": { 00:17:57.357 "rw_ios_per_sec": 0, 00:17:57.357 "rw_mbytes_per_sec": 0, 00:17:57.357 "r_mbytes_per_sec": 0, 00:17:57.357 "w_mbytes_per_sec": 0 00:17:57.357 }, 00:17:57.357 "claimed": false, 00:17:57.357 "zoned": false, 00:17:57.357 "supported_io_types": { 00:17:57.357 "read": true, 00:17:57.357 "write": true, 00:17:57.357 "unmap": true, 00:17:57.357 "flush": true, 00:17:57.357 "reset": false, 00:17:57.357 "nvme_admin": false, 00:17:57.357 "nvme_io": false, 00:17:57.357 "nvme_io_md": false, 00:17:57.357 "write_zeroes": true, 00:17:57.357 "zcopy": false, 00:17:57.357 "get_zone_info": false, 00:17:57.357 "zone_management": false, 00:17:57.357 "zone_append": false, 00:17:57.357 "compare": false, 00:17:57.357 "compare_and_write": false, 00:17:57.357 "abort": false, 00:17:57.357 "seek_hole": false, 00:17:57.357 "seek_data": false, 00:17:57.357 "copy": false, 00:17:57.357 "nvme_iov_md": false 00:17:57.357 }, 00:17:57.358 "driver_specific": { 00:17:57.358 "ftl": { 00:17:57.358 "base_bdev": "6cc14fd7-5453-496d-99b5-1a3d0a9c13ce", 00:17:57.358 "cache": "nvc0n1p0" 00:17:57.358 } 00:17:57.358 } 00:17:57.358 } 00:17:57.358 ] 00:17:57.358 13:30:45 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:57.358 13:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:57.358 13:30:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:57.616 13:30:46 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:57.617 13:30:46 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:57.876 [2024-11-26 13:30:46.292502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.292535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.876 [2024-11-26 13:30:46.292545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:57.876 [2024-11-26 13:30:46.292553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.292580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.876 [2024-11-26 13:30:46.294854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.294879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.876 [2024-11-26 13:30:46.294889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:17:57.876 [2024-11-26 13:30:46.294896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.295291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.295310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.876 [2024-11-26 13:30:46.295319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:17:57.876 [2024-11-26 13:30:46.295325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.297799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.297816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.876 
[2024-11-26 13:30:46.297825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:17:57.876 [2024-11-26 13:30:46.297833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.302559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.302582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.876 [2024-11-26 13:30:46.302591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:17:57.876 [2024-11-26 13:30:46.302598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.319675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.319701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.876 [2024-11-26 13:30:46.319722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.019 ms 00:17:57.876 [2024-11-26 13:30:46.319729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.331637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.331664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.876 [2024-11-26 13:30:46.331677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.867 ms 00:17:57.876 [2024-11-26 13:30:46.331684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.331835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.331875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.876 [2024-11-26 13:30:46.331884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:57.876 [2024-11-26 13:30:46.331890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.349351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.349375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.876 [2024-11-26 13:30:46.349385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:17:57.876 [2024-11-26 13:30:46.349390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.366655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.876 [2024-11-26 13:30:46.366678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.876 [2024-11-26 13:30:46.366687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.227 ms 00:17:57.876 [2024-11-26 13:30:46.366693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.876 [2024-11-26 13:30:46.383953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.877 [2024-11-26 13:30:46.383978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.877 [2024-11-26 13:30:46.383988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.220 ms 00:17:57.877 [2024-11-26 13:30:46.383993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.877 [2024-11-26 13:30:46.400898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.877 [2024-11-26 13:30:46.400923] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.877 [2024-11-26 13:30:46.400932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.818 ms 00:17:57.877 [2024-11-26 13:30:46.400938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.877 [2024-11-26 13:30:46.400973] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.877 [2024-11-26 13:30:46.400985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.400995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 
[2024-11-26 13:30:46.401134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:57.877 [2024-11-26 13:30:46.401308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.877 [2024-11-26 13:30:46.401563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.878 [2024-11-26 13:30:46.401696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.878 [2024-11-26 13:30:46.401705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11adbe0d-52d5-4e2d-b737-b10f347c7e6f 00:17:57.878 [2024-11-26 13:30:46.401712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.878 [2024-11-26 13:30:46.401721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.878 [2024-11-26 13:30:46.401729] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.878 [2024-11-26 13:30:46.401736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.878 [2024-11-26 13:30:46.401742] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.878 [2024-11-26 13:30:46.401749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.878 [2024-11-26 13:30:46.401756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.878 [2024-11-26 13:30:46.401763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.878 [2024-11-26 13:30:46.401768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.878 [2024-11-26 13:30:46.401775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.878 [2024-11-26 13:30:46.401781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.878 [2024-11-26 13:30:46.401788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:17:57.878 [2024-11-26 13:30:46.401794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.878 [2024-11-26 13:30:46.411374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.878 [2024-11-26 13:30:46.411400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.878 [2024-11-26 13:30:46.411410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.549 ms 00:17:57.878 [2024-11-26 13:30:46.411416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.878 [2024-11-26 13:30:46.411709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.878 [2024-11-26 13:30:46.411723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.878 [2024-11-26 13:30:46.411731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:57.878 [2024-11-26 13:30:46.411737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.448191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.448219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.137 [2024-11-26 13:30:46.448230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.448236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
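[annotation] The statistics block above prints "WAF: inf" because write amplification factor is the ratio of total media writes to user writes, and at this unload the device has seen 960 internal (metadata) writes against 0 user writes — fio has not touched it yet. That reading of the two counters is an inference from the dump, not from FTL documentation. A guarded one-liner for the same ratio:

# WAF = total writes / user writes; the zero denominator is why the dump says "inf".
awk 'BEGIN { total=960; user=0; print (user ? total/user : "inf") }'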
00:17:58.137 [2024-11-26 13:30:46.448297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.448304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.137 [2024-11-26 13:30:46.448312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.448318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.448389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.448399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.137 [2024-11-26 13:30:46.448408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.448414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.448437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.448454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.137 [2024-11-26 13:30:46.448462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.448468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.515310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.515344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.137 [2024-11-26 13:30:46.515355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.515362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.567066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.567100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.137 [2024-11-26 13:30:46.567112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.567119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.567210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.567219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.137 [2024-11-26 13:30:46.567229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.137 [2024-11-26 13:30:46.567235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.137 [2024-11-26 13:30:46.567301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.137 [2024-11-26 13:30:46.567309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.137 [2024-11-26 13:30:46.567317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.138 [2024-11-26 13:30:46.567322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.138 [2024-11-26 13:30:46.567421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.138 [2024-11-26 13:30:46.567433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.138 [2024-11-26 13:30:46.567452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.138 [2024-11-26 
13:30:46.567460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.138 [2024-11-26 13:30:46.567513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.138 [2024-11-26 13:30:46.567521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.138 [2024-11-26 13:30:46.567528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.138 [2024-11-26 13:30:46.567535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.138 [2024-11-26 13:30:46.567580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.138 [2024-11-26 13:30:46.567588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.138 [2024-11-26 13:30:46.567597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.138 [2024-11-26 13:30:46.567604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.138 [2024-11-26 13:30:46.567655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.138 [2024-11-26 13:30:46.567664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.138 [2024-11-26 13:30:46.567672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.138 [2024-11-26 13:30:46.567678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.138 [2024-11-26 13:30:46.567832] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.298 ms, result 0 00:17:58.138 true 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75124 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75124 ']' 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75124 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75124 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75124' 00:17:58.138 killing process with pid 75124 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75124 00:17:58.138 13:30:46 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75124 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:08.124 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.125 13:30:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:08.125 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:08.125 fio-3.35 00:18:08.125 Starting 1 thread 00:18:12.538 00:18:12.538 test: (groupid=0, jobs=1): err= 0: pid=75303: Tue Nov 26 13:31:00 2024 00:18:12.538 read: IOPS=942, BW=62.6MiB/s (65.6MB/s)(255MiB/4069msec) 00:18:12.538 slat (nsec): min=2937, max=20899, avg=4359.84, stdev=2000.18 00:18:12.538 clat (usec): min=238, max=1772, avg=480.40, stdev=195.56 00:18:12.538 lat (usec): min=241, max=1777, avg=484.76, stdev=196.24 00:18:12.538 clat percentiles (usec): 00:18:12.538 | 1.00th=[ 285], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:18:12.538 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 396], 60.00th=[ 461], 00:18:12.538 | 70.00th=[ 529], 80.00th=[ 660], 90.00th=[ 824], 95.00th=[ 865], 00:18:12.538 | 99.00th=[ 947], 99.50th=[ 1057], 99.90th=[ 1172], 99.95th=[ 1336], 00:18:12.538 | 99.99th=[ 1778] 00:18:12.538 write: IOPS=948, BW=63.0MiB/s (66.1MB/s)(256MiB/4065msec); 0 zone resets 00:18:12.538 slat (nsec): min=13439, max=48130, avg=18390.32, stdev=3881.48 00:18:12.538 clat (usec): min=291, max=1793, avg=542.12, stdev=241.43 00:18:12.538 lat (usec): min=309, max=1814, avg=560.51, stdev=242.83 00:18:12.538 clat percentiles (usec): 00:18:12.538 | 1.00th=[ 306], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:18:12.538 | 30.00th=[ 351], 40.00th=[ 379], 50.00th=[ 478], 60.00th=[ 537], 00:18:12.538 | 70.00th=[ 611], 80.00th=[ 750], 90.00th=[ 889], 95.00th=[ 947], 00:18:12.538 | 99.00th=[ 1516], 99.50th=[ 1565], 99.90th=[ 1745], 99.95th=[ 1762], 00:18:12.538 | 99.99th=[ 1795] 00:18:12.538 bw ( KiB/s): min=40120, max=90440, per=99.39%, avg=64113.50, stdev=20132.96, samples=8 00:18:12.538 iops : min= 590, max= 1330, avg=942.75, stdev=296.19, samples=8 00:18:12.538 lat (usec) : 250=0.05%, 500=61.31%, 750=19.96%, 
1000=16.95% 00:18:12.538 lat (msec) : 2=1.73% 00:18:12.538 cpu : usr=99.29%, sys=0.05%, ctx=6, majf=0, minf=1169 00:18:12.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:12.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.538 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:12.538 00:18:12.538 Run status group 0 (all jobs): 00:18:12.538 READ: bw=62.6MiB/s (65.6MB/s), 62.6MiB/s-62.6MiB/s (65.6MB/s-65.6MB/s), io=255MiB (267MB), run=4069-4069msec 00:18:12.538 WRITE: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=256MiB (269MB), run=4065-4065msec 00:18:14.009 ----------------------------------------------------- 00:18:14.009 Suppressions used: 00:18:14.009 count bytes template 00:18:14.009 1 5 /usr/src/fio/parse.c 00:18:14.009 1 8 libtcmalloc_minimal.so 00:18:14.009 1 904 libcrypto.so 00:18:14.009 ----------------------------------------------------- 00:18:14.009 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:14.009 13:31:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.009 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:14.009 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:14.009 fio-3.35 00:18:14.009 Starting 2 threads 00:18:40.584 00:18:40.584 first_half: (groupid=0, jobs=1): err= 0: pid=75406: Tue Nov 26 13:31:26 2024 00:18:40.584 read: IOPS=2849, BW=11.1MiB/s (11.7MB/s)(255MiB/22894msec) 00:18:40.584 slat (nsec): min=3027, max=48505, avg=4884.93, stdev=1066.06 00:18:40.584 clat (usec): min=635, max=376529, avg=33289.39, stdev=18414.97 00:18:40.584 lat (usec): min=642, max=376533, avg=33294.27, stdev=18415.04 00:18:40.584 clat percentiles (msec): 00:18:40.584 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 30], 00:18:40.584 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:40.584 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 43], 00:18:40.584 | 99.00th=[ 127], 99.50th=[ 146], 99.90th=[ 268], 99.95th=[ 334], 00:18:40.584 | 99.99th=[ 372] 00:18:40.584 write: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(256MiB/19435msec); 0 zone resets 00:18:40.584 slat (usec): min=3, max=440, avg= 6.75, stdev= 4.46 00:18:40.584 clat (usec): min=360, max=109496, avg=11511.96, stdev=20046.05 00:18:40.584 lat (usec): min=366, max=109505, avg=11518.71, stdev=20046.30 00:18:40.584 clat percentiles (usec): 00:18:40.584 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 1004], 20.00th=[ 1516], 00:18:40.584 | 30.00th=[ 2802], 40.00th=[ 4228], 50.00th=[ 5014], 60.00th=[ 5538], 00:18:40.584 | 70.00th=[ 6521], 80.00th=[ 10159], 90.00th=[ 29754], 95.00th=[ 67634], 00:18:40.584 | 99.00th=[ 90702], 99.50th=[ 94897], 99.90th=[100140], 99.95th=[102237], 00:18:40.584 | 99.99th=[108528] 00:18:40.584 bw ( KiB/s): min= 968, max=42688, per=80.96%, avg=21839.13, stdev=12729.09, samples=24 00:18:40.584 iops : min= 242, max=10672, avg=5459.75, stdev=3182.28, samples=24 00:18:40.584 lat (usec) : 500=0.02%, 750=1.71%, 1000=3.28% 00:18:40.584 lat (msec) : 2=7.28%, 4=7.18%, 10=20.82%, 20=6.19%, 50=47.17% 00:18:40.584 lat (msec) : 100=5.42%, 250=0.88%, 500=0.06% 00:18:40.584 cpu : usr=99.25%, sys=0.11%, ctx=42, majf=0, minf=5559 00:18:40.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:40.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.584 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.584 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.584 second_half: (groupid=0, jobs=1): err= 0: pid=75407: Tue Nov 26 13:31:26 2024 00:18:40.584 read: IOPS=2880, BW=11.3MiB/s (11.8MB/s)(254MiB/22611msec) 00:18:40.584 slat (nsec): min=3042, max=33817, avg=4037.58, stdev=934.36 00:18:40.584 clat (usec): min=645, max=377904, avg=33763.93, stdev=16848.79 00:18:40.584 lat (usec): min=649, max=377908, avg=33767.97, stdev=16848.89 00:18:40.584 clat percentiles (msec): 00:18:40.584 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:18:40.584 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:40.584 | 70.00th=[ 32], 80.00th=[ 35], 
90.00th=[ 39], 95.00th=[ 46], 00:18:40.584 | 99.00th=[ 124], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 264], 00:18:40.584 | 99.99th=[ 376] 00:18:40.584 write: IOPS=4469, BW=17.5MiB/s (18.3MB/s)(256MiB/14662msec); 0 zone resets 00:18:40.584 slat (usec): min=3, max=140, avg= 5.75, stdev= 2.47 00:18:40.584 clat (usec): min=368, max=110011, avg=10583.84, stdev=19808.81 00:18:40.584 lat (usec): min=374, max=110016, avg=10589.59, stdev=19808.81 00:18:40.584 clat percentiles (usec): 00:18:40.584 | 1.00th=[ 709], 5.00th=[ 889], 10.00th=[ 1037], 20.00th=[ 1270], 00:18:40.584 | 30.00th=[ 1729], 40.00th=[ 2999], 50.00th=[ 4047], 60.00th=[ 5080], 00:18:40.584 | 70.00th=[ 6063], 80.00th=[ 10159], 90.00th=[ 16188], 95.00th=[ 67634], 00:18:40.584 | 99.00th=[ 89654], 99.50th=[ 94897], 99.90th=[103285], 99.95th=[105382], 00:18:40.584 | 99.99th=[109577] 00:18:40.584 bw ( KiB/s): min= 4192, max=40864, per=100.00%, avg=27589.26, stdev=10152.25, samples=19 00:18:40.584 iops : min= 1048, max=10216, avg=6897.32, stdev=2538.06, samples=19 00:18:40.584 lat (usec) : 500=0.02%, 750=0.86%, 1000=3.38% 00:18:40.584 lat (msec) : 2=12.44%, 4=8.63%, 10=15.29%, 20=5.91%, 50=46.76% 00:18:40.584 lat (msec) : 100=5.77%, 250=0.91%, 500=0.03% 00:18:40.584 cpu : usr=99.39%, sys=0.10%, ctx=41, majf=0, minf=5558 00:18:40.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:40.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.584 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:40.584 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:40.584 00:18:40.584 Run status group 0 (all jobs): 00:18:40.584 READ: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.3MiB/s (11.7MB/s-11.8MB/s), io=509MiB (534MB), run=22611-22894msec 00:18:40.584 WRITE: bw=26.3MiB/s (27.6MB/s), 13.2MiB/s-17.5MiB/s (13.8MB/s-18.3MB/s), io=512MiB (537MB), run=14662-19435msec 00:18:40.584 ----------------------------------------------------- 00:18:40.584 Suppressions used: 00:18:40.584 count bytes template 00:18:40.584 2 10 /usr/src/fio/parse.c 00:18:40.584 3 288 /usr/src/fio/iolog.c 00:18:40.584 1 8 libtcmalloc_minimal.so 00:18:40.584 1 904 libcrypto.so 00:18:40.584 ----------------------------------------------------- 00:18:40.584 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.584 13:31:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:40.584 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:40.584 fio-3.35 00:18:40.584 Starting 1 thread 00:19:02.553 00:19:02.553 test: (groupid=0, jobs=1): err= 0: pid=75710: Tue Nov 26 13:31:48 2024 00:19:02.553 read: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(255MiB/11259msec) 00:19:02.553 slat (usec): min=3, max=170, avg= 5.60, stdev= 2.19 00:19:02.553 clat (usec): min=1441, max=40749, avg=22093.85, stdev=3576.76 00:19:02.553 lat (usec): min=1445, max=40756, avg=22099.45, stdev=3577.16 00:19:02.553 clat percentiles (usec): 00:19:02.553 | 1.00th=[15926], 5.00th=[16909], 10.00th=[17957], 20.00th=[19006], 00:19:02.553 | 30.00th=[20055], 40.00th=[20841], 50.00th=[21627], 60.00th=[22676], 00:19:02.553 | 70.00th=[23462], 80.00th=[24773], 90.00th=[26870], 95.00th=[28443], 00:19:02.553 | 99.00th=[32637], 99.50th=[33817], 99.90th=[37487], 99.95th=[38536], 00:19:02.553 | 99.99th=[40633] 00:19:02.553 write: IOPS=8439, BW=33.0MiB/s (34.6MB/s)(256MiB/7765msec); 0 zone resets 00:19:02.553 slat (usec): min=4, max=2088, avg= 8.58, stdev=15.92 00:19:02.553 clat (usec): min=798, max=101262, avg=15088.24, stdev=19249.58 00:19:02.553 lat (usec): min=806, max=101270, avg=15096.82, stdev=19249.74 00:19:02.553 clat percentiles (usec): 00:19:02.553 | 1.00th=[ 1369], 5.00th=[ 1680], 10.00th=[ 1893], 20.00th=[ 2212], 00:19:02.553 | 30.00th=[ 2606], 40.00th=[ 3556], 50.00th=[ 8979], 60.00th=[10945], 00:19:02.553 | 70.00th=[12780], 80.00th=[15401], 90.00th=[56361], 95.00th=[59507], 00:19:02.553 | 99.00th=[65274], 99.50th=[66847], 99.90th=[77071], 99.95th=[83362], 00:19:02.553 | 99.99th=[93848] 00:19:02.553 bw ( KiB/s): min=14176, max=51096, per=97.06%, avg=32768.00, stdev=9293.30, samples=16 00:19:02.553 iops : min= 3544, max=12774, avg=8192.00, stdev=2323.33, samples=16 00:19:02.553 lat (usec) : 1000=0.03% 00:19:02.553 lat (msec) : 2=6.61%, 4=13.90%, 10=7.27%, 20=29.19%, 50=35.39% 00:19:02.553 lat (msec) : 100=7.62%, 250=0.01% 
00:19:02.553 cpu : usr=97.80%, sys=0.49%, ctx=54, majf=0, minf=5565 00:19:02.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:02.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.553 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.553 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.553 00:19:02.553 Run status group 0 (all jobs): 00:19:02.553 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=255MiB (267MB), run=11259-11259msec 00:19:02.553 WRITE: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=256MiB (268MB), run=7765-7765msec 00:19:02.553 ----------------------------------------------------- 00:19:02.553 Suppressions used: 00:19:02.553 count bytes template 00:19:02.553 1 5 /usr/src/fio/parse.c 00:19:02.553 2 192 /usr/src/fio/iolog.c 00:19:02.553 1 8 libtcmalloc_minimal.so 00:19:02.553 1 904 libcrypto.so 00:19:02.553 ----------------------------------------------------- 00:19:02.553 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.553 Remove shared memory files 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57091 /dev/shm/spdk_tgt_trace.pid74050 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:02.553 ************************************ 00:19:02.553 END TEST ftl_fio_basic 00:19:02.553 ************************************ 00:19:02.553 00:19:02.553 real 1m11.201s 00:19:02.553 user 2m35.941s 00:19:02.553 sys 0m2.960s 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.553 13:31:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.553 13:31:50 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:02.553 13:31:50 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:02.553 13:31:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.553 13:31:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:02.553 ************************************ 00:19:02.553 START TEST ftl_bdevperf 00:19:02.553 ************************************ 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:02.553 * Looking for test storage... 
00:19:02.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.553 --rc genhtml_branch_coverage=1 00:19:02.553 --rc genhtml_function_coverage=1 00:19:02.553 --rc genhtml_legend=1 00:19:02.553 --rc geninfo_all_blocks=1 00:19:02.553 --rc geninfo_unexecuted_blocks=1 00:19:02.553 00:19:02.553 ' 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.553 --rc genhtml_branch_coverage=1 00:19:02.553 
--rc genhtml_function_coverage=1 00:19:02.553 --rc genhtml_legend=1 00:19:02.553 --rc geninfo_all_blocks=1 00:19:02.553 --rc geninfo_unexecuted_blocks=1 00:19:02.553 00:19:02.553 ' 00:19:02.553 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.554 --rc genhtml_branch_coverage=1 00:19:02.554 --rc genhtml_function_coverage=1 00:19:02.554 --rc genhtml_legend=1 00:19:02.554 --rc geninfo_all_blocks=1 00:19:02.554 --rc geninfo_unexecuted_blocks=1 00:19:02.554 00:19:02.554 ' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.554 --rc genhtml_branch_coverage=1 00:19:02.554 --rc genhtml_function_coverage=1 00:19:02.554 --rc genhtml_legend=1 00:19:02.554 --rc geninfo_all_blocks=1 00:19:02.554 --rc geninfo_unexecuted_blocks=1 00:19:02.554 00:19:02.554 ' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76002 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76002 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76002 ']' 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.554 13:31:50 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.554 [2024-11-26 13:31:50.634684] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:19:02.554 [2024-11-26 13:31:50.635051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76002 ] 00:19:02.554 [2024-11-26 13:31:50.793260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.554 [2024-11-26 13:31:50.926370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.123 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.123 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:03.123 13:31:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:03.123 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:03.124 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:03.124 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:03.124 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:03.124 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:03.385 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.385 { 00:19:03.385 "name": "nvme0n1", 00:19:03.385 "aliases": [ 00:19:03.385 "58236447-8dce-40ac-8e13-8c07162224a6" 00:19:03.385 ], 00:19:03.385 "product_name": "NVMe disk", 00:19:03.385 "block_size": 4096, 00:19:03.385 "num_blocks": 1310720, 00:19:03.385 "uuid": "58236447-8dce-40ac-8e13-8c07162224a6", 00:19:03.385 "numa_id": -1, 00:19:03.385 "assigned_rate_limits": { 00:19:03.385 "rw_ios_per_sec": 0, 00:19:03.385 "rw_mbytes_per_sec": 0, 00:19:03.385 "r_mbytes_per_sec": 0, 00:19:03.385 "w_mbytes_per_sec": 0 00:19:03.385 }, 00:19:03.385 "claimed": true, 00:19:03.385 "claim_type": "read_many_write_one", 00:19:03.385 "zoned": false, 00:19:03.385 "supported_io_types": { 00:19:03.385 "read": true, 00:19:03.385 "write": true, 00:19:03.385 "unmap": true, 00:19:03.385 "flush": true, 00:19:03.385 "reset": true, 00:19:03.385 "nvme_admin": true, 00:19:03.385 "nvme_io": true, 00:19:03.385 "nvme_io_md": false, 00:19:03.385 "write_zeroes": true, 00:19:03.385 "zcopy": false, 00:19:03.385 "get_zone_info": false, 00:19:03.385 "zone_management": false, 00:19:03.385 "zone_append": false, 00:19:03.385 "compare": true, 00:19:03.385 "compare_and_write": false, 00:19:03.385 "abort": true, 00:19:03.385 "seek_hole": false, 00:19:03.385 "seek_data": false, 00:19:03.385 "copy": true, 00:19:03.385 "nvme_iov_md": false 00:19:03.385 }, 00:19:03.385 "driver_specific": { 00:19:03.385 
"nvme": [ 00:19:03.385 { 00:19:03.385 "pci_address": "0000:00:11.0", 00:19:03.385 "trid": { 00:19:03.385 "trtype": "PCIe", 00:19:03.385 "traddr": "0000:00:11.0" 00:19:03.385 }, 00:19:03.385 "ctrlr_data": { 00:19:03.385 "cntlid": 0, 00:19:03.385 "vendor_id": "0x1b36", 00:19:03.385 "model_number": "QEMU NVMe Ctrl", 00:19:03.385 "serial_number": "12341", 00:19:03.385 "firmware_revision": "8.0.0", 00:19:03.385 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:03.385 "oacs": { 00:19:03.385 "security": 0, 00:19:03.385 "format": 1, 00:19:03.385 "firmware": 0, 00:19:03.385 "ns_manage": 1 00:19:03.385 }, 00:19:03.385 "multi_ctrlr": false, 00:19:03.385 "ana_reporting": false 00:19:03.385 }, 00:19:03.385 "vs": { 00:19:03.386 "nvme_version": "1.4" 00:19:03.386 }, 00:19:03.386 "ns_data": { 00:19:03.386 "id": 1, 00:19:03.386 "can_share": false 00:19:03.386 } 00:19:03.386 } 00:19:03.386 ], 00:19:03.386 "mp_policy": "active_passive" 00:19:03.386 } 00:19:03.386 } 00:19:03.386 ]' 00:19:03.386 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:03.646 13:31:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:03.905 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=f1ce474d-dc0a-459f-852f-29a83cdd5754 00:19:03.905 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:03.905 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1ce474d-dc0a-459f-852f-29a83cdd5754 00:19:03.905 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:04.166 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=defc29d6-eaff-4282-994c-8c169f041477 00:19:04.166 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u defc29d6-eaff-4282-994c-8c169f041477 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.428 13:31:52 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.428 13:31:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.689 { 00:19:04.689 "name": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:04.689 "aliases": [ 00:19:04.689 "lvs/nvme0n1p0" 00:19:04.689 ], 00:19:04.689 "product_name": "Logical Volume", 00:19:04.689 "block_size": 4096, 00:19:04.689 "num_blocks": 26476544, 00:19:04.689 "uuid": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:04.689 "assigned_rate_limits": { 00:19:04.689 "rw_ios_per_sec": 0, 00:19:04.689 "rw_mbytes_per_sec": 0, 00:19:04.689 "r_mbytes_per_sec": 0, 00:19:04.689 "w_mbytes_per_sec": 0 00:19:04.689 }, 00:19:04.689 "claimed": false, 00:19:04.689 "zoned": false, 00:19:04.689 "supported_io_types": { 00:19:04.689 "read": true, 00:19:04.689 "write": true, 00:19:04.689 "unmap": true, 00:19:04.689 "flush": false, 00:19:04.689 "reset": true, 00:19:04.689 "nvme_admin": false, 00:19:04.689 "nvme_io": false, 00:19:04.689 "nvme_io_md": false, 00:19:04.689 "write_zeroes": true, 00:19:04.689 "zcopy": false, 00:19:04.689 "get_zone_info": false, 00:19:04.689 "zone_management": false, 00:19:04.689 "zone_append": false, 00:19:04.689 "compare": false, 00:19:04.689 "compare_and_write": false, 00:19:04.689 "abort": false, 00:19:04.689 "seek_hole": true, 00:19:04.689 "seek_data": true, 00:19:04.689 "copy": false, 00:19:04.689 "nvme_iov_md": false 00:19:04.689 }, 00:19:04.689 "driver_specific": { 00:19:04.689 "lvol": { 00:19:04.689 "lvol_store_uuid": "defc29d6-eaff-4282-994c-8c169f041477", 00:19:04.689 "base_bdev": "nvme0n1", 00:19:04.689 "thin_provision": true, 00:19:04.689 "num_allocated_clusters": 0, 00:19:04.689 "snapshot": false, 00:19:04.689 "clone": false, 00:19:04.689 "esnap_clone": false 00:19:04.689 } 00:19:04.689 } 00:19:04.689 } 00:19:04.689 ]' 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:04.689 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.949 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.210 { 00:19:05.210 "name": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:05.210 "aliases": [ 00:19:05.210 "lvs/nvme0n1p0" 00:19:05.210 ], 00:19:05.210 "product_name": "Logical Volume", 00:19:05.210 "block_size": 4096, 00:19:05.210 "num_blocks": 26476544, 00:19:05.210 "uuid": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:05.210 "assigned_rate_limits": { 00:19:05.210 "rw_ios_per_sec": 0, 00:19:05.210 "rw_mbytes_per_sec": 0, 00:19:05.210 "r_mbytes_per_sec": 0, 00:19:05.210 "w_mbytes_per_sec": 0 00:19:05.210 }, 00:19:05.210 "claimed": false, 00:19:05.210 "zoned": false, 00:19:05.210 "supported_io_types": { 00:19:05.210 "read": true, 00:19:05.210 "write": true, 00:19:05.210 "unmap": true, 00:19:05.210 "flush": false, 00:19:05.210 "reset": true, 00:19:05.210 "nvme_admin": false, 00:19:05.210 "nvme_io": false, 00:19:05.210 "nvme_io_md": false, 00:19:05.210 "write_zeroes": true, 00:19:05.210 "zcopy": false, 00:19:05.210 "get_zone_info": false, 00:19:05.210 "zone_management": false, 00:19:05.210 "zone_append": false, 00:19:05.210 "compare": false, 00:19:05.210 "compare_and_write": false, 00:19:05.210 "abort": false, 00:19:05.210 "seek_hole": true, 00:19:05.210 "seek_data": true, 00:19:05.210 "copy": false, 00:19:05.210 "nvme_iov_md": false 00:19:05.210 }, 00:19:05.210 "driver_specific": { 00:19:05.210 "lvol": { 00:19:05.210 "lvol_store_uuid": "defc29d6-eaff-4282-994c-8c169f041477", 00:19:05.210 "base_bdev": "nvme0n1", 00:19:05.210 "thin_provision": true, 00:19:05.210 "num_allocated_clusters": 0, 00:19:05.210 "snapshot": false, 00:19:05.210 "clone": false, 00:19:05.210 "esnap_clone": false 00:19:05.210 } 00:19:05.210 } 00:19:05.210 } 00:19:05.210 ]' 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:05.210 13:31:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:05.472 13:31:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dac71307-46eb-4726-8f9a-9f4b24faab4f 00:19:05.733 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.733 { 00:19:05.733 "name": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:05.733 "aliases": [ 00:19:05.733 "lvs/nvme0n1p0" 00:19:05.733 ], 00:19:05.733 "product_name": "Logical Volume", 00:19:05.733 "block_size": 4096, 00:19:05.733 "num_blocks": 26476544, 00:19:05.733 "uuid": "dac71307-46eb-4726-8f9a-9f4b24faab4f", 00:19:05.733 "assigned_rate_limits": { 00:19:05.733 "rw_ios_per_sec": 0, 00:19:05.733 "rw_mbytes_per_sec": 0, 00:19:05.733 "r_mbytes_per_sec": 0, 00:19:05.733 "w_mbytes_per_sec": 0 00:19:05.733 }, 00:19:05.733 "claimed": false, 00:19:05.733 "zoned": false, 00:19:05.733 "supported_io_types": { 00:19:05.733 "read": true, 00:19:05.733 "write": true, 00:19:05.733 "unmap": true, 00:19:05.733 "flush": false, 00:19:05.733 "reset": true, 00:19:05.733 "nvme_admin": false, 00:19:05.733 "nvme_io": false, 00:19:05.733 "nvme_io_md": false, 00:19:05.733 "write_zeroes": true, 00:19:05.733 "zcopy": false, 00:19:05.733 "get_zone_info": false, 00:19:05.733 "zone_management": false, 00:19:05.733 "zone_append": false, 00:19:05.733 "compare": false, 00:19:05.733 "compare_and_write": false, 00:19:05.733 "abort": false, 00:19:05.733 "seek_hole": true, 00:19:05.733 "seek_data": true, 00:19:05.734 "copy": false, 00:19:05.734 "nvme_iov_md": false 00:19:05.734 }, 00:19:05.734 "driver_specific": { 00:19:05.734 "lvol": { 00:19:05.734 "lvol_store_uuid": "defc29d6-eaff-4282-994c-8c169f041477", 00:19:05.734 "base_bdev": "nvme0n1", 00:19:05.734 "thin_provision": true, 00:19:05.734 "num_allocated_clusters": 0, 00:19:05.734 "snapshot": false, 00:19:05.734 "clone": false, 00:19:05.734 "esnap_clone": false 00:19:05.734 } 00:19:05.734 } 00:19:05.734 } 00:19:05.734 ]' 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:05.734 13:31:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dac71307-46eb-4726-8f9a-9f4b24faab4f -c nvc0n1p0 --l2p_dram_limit 20 00:19:05.995 [2024-11-26 13:31:54.344558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.344623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:05.995 [2024-11-26 13:31:54.344637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:05.995 [2024-11-26 13:31:54.344648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.995 [2024-11-26 13:31:54.344705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.344718] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.995 [2024-11-26 13:31:54.344725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:05.995 [2024-11-26 13:31:54.344734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.995 [2024-11-26 13:31:54.344749] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:05.995 [2024-11-26 13:31:54.345387] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:05.995 [2024-11-26 13:31:54.345405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.345414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.995 [2024-11-26 13:31:54.345422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:19:05.995 [2024-11-26 13:31:54.345430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.995 [2024-11-26 13:31:54.345484] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 78f5071e-1aec-42c4-b06b-8babeb511e18 00:19:05.995 [2024-11-26 13:31:54.347264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.347318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:05.995 [2024-11-26 13:31:54.347333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:05.995 [2024-11-26 13:31:54.347340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.995 [2024-11-26 13:31:54.355912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.355961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.995 [2024-11-26 13:31:54.355974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.495 ms 00:19:05.995 [2024-11-26 13:31:54.355981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.995 [2024-11-26 13:31:54.356077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.995 [2024-11-26 13:31:54.356086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.996 [2024-11-26 13:31:54.356100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:05.996 [2024-11-26 13:31:54.356107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.356158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.996 [2024-11-26 13:31:54.356166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:05.996 [2024-11-26 13:31:54.356175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:05.996 [2024-11-26 13:31:54.356181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.356201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.996 [2024-11-26 13:31:54.359971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.996 [2024-11-26 13:31:54.360015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.996 [2024-11-26 13:31:54.360024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.777 ms 00:19:05.996 [2024-11-26 13:31:54.360036] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.360069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.996 [2024-11-26 13:31:54.360077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:05.996 [2024-11-26 13:31:54.360085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:05.996 [2024-11-26 13:31:54.360093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.360130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:05.996 [2024-11-26 13:31:54.360252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:05.996 [2024-11-26 13:31:54.360263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:05.996 [2024-11-26 13:31:54.360276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:05.996 [2024-11-26 13:31:54.360284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360294] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360300] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:05.996 [2024-11-26 13:31:54.360308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:05.996 [2024-11-26 13:31:54.360316] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:05.996 [2024-11-26 13:31:54.360324] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:05.996 [2024-11-26 13:31:54.360330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.996 [2024-11-26 13:31:54.360341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:05.996 [2024-11-26 13:31:54.360348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:05.996 [2024-11-26 13:31:54.360357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.360420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.996 [2024-11-26 13:31:54.360429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:05.996 [2024-11-26 13:31:54.360435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:05.996 [2024-11-26 13:31:54.360458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.996 [2024-11-26 13:31:54.360530] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:05.996 [2024-11-26 13:31:54.360541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:05.996 [2024-11-26 13:31:54.360550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:05.996 [2024-11-26 13:31:54.360573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:05.996 
[2024-11-26 13:31:54.360585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:05.996 [2024-11-26 13:31:54.360590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.996 [2024-11-26 13:31:54.360603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:05.996 [2024-11-26 13:31:54.360617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:05.996 [2024-11-26 13:31:54.360622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.996 [2024-11-26 13:31:54.360629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:05.996 [2024-11-26 13:31:54.360634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:05.996 [2024-11-26 13:31:54.360644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:05.996 [2024-11-26 13:31:54.360657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:05.996 [2024-11-26 13:31:54.360674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:05.996 [2024-11-26 13:31:54.360696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:05.996 [2024-11-26 13:31:54.360713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:05.996 [2024-11-26 13:31:54.360732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:05.996 [2024-11-26 13:31:54.360752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.996 [2024-11-26 13:31:54.360765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:05.996 [2024-11-26 13:31:54.360771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:05.996 [2024-11-26 13:31:54.360777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.996 [2024-11-26 13:31:54.360784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:05.996 [2024-11-26 13:31:54.360789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:05.996 [2024-11-26 13:31:54.360796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:05.996 [2024-11-26 13:31:54.360809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:05.996 [2024-11-26 13:31:54.360814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360821] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:05.996 [2024-11-26 13:31:54.360828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:05.996 [2024-11-26 13:31:54.360835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.996 [2024-11-26 13:31:54.360851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:05.996 [2024-11-26 13:31:54.360856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:05.996 [2024-11-26 13:31:54.360863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:05.996 [2024-11-26 13:31:54.360869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:05.996 [2024-11-26 13:31:54.360875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:05.996 [2024-11-26 13:31:54.360880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:05.996 [2024-11-26 13:31:54.360892] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:05.996 [2024-11-26 13:31:54.360900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.996 [2024-11-26 13:31:54.360910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:05.996 [2024-11-26 13:31:54.360915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:05.996 [2024-11-26 13:31:54.360923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:05.996 [2024-11-26 13:31:54.360929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:05.996 [2024-11-26 13:31:54.360937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:05.996 [2024-11-26 13:31:54.360942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:05.996 [2024-11-26 13:31:54.360950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:05.996 [2024-11-26 13:31:54.360955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:05.996 [2024-11-26 13:31:54.360964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:05.996 [2024-11-26 13:31:54.360970] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:05.996 [2024-11-26 13:31:54.360979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:05.996 [2024-11-26 13:31:54.360986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:05.997 [2024-11-26 13:31:54.360995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:05.997 [2024-11-26 13:31:54.361001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:05.997 [2024-11-26 13:31:54.361008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:05.997 [2024-11-26 13:31:54.361014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.997 [2024-11-26 13:31:54.361025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:05.997 [2024-11-26 13:31:54.361031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:05.997 [2024-11-26 13:31:54.361038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:05.997 [2024-11-26 13:31:54.361043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:05.997 [2024-11-26 13:31:54.361051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.997 [2024-11-26 13:31:54.361057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:05.997 [2024-11-26 13:31:54.361065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:05.997 [2024-11-26 13:31:54.361071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.997 [2024-11-26 13:31:54.361102] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:05.997 [2024-11-26 13:31:54.361109] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:10.208 [2024-11-26 13:31:57.899168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.899257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:10.208 [2024-11-26 13:31:57.899279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3538.040 ms 00:19:10.208 [2024-11-26 13:31:57.899289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.930922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.930982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.208 [2024-11-26 13:31:57.930998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.365 ms 00:19:10.208 [2024-11-26 13:31:57.931007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.931167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.931178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.208 [2024-11-26 13:31:57.931193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:10.208 [2024-11-26 13:31:57.931201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.976991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.977261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.208 [2024-11-26 13:31:57.977290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.753 ms 00:19:10.208 [2024-11-26 13:31:57.977300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.977352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.977362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.208 [2024-11-26 13:31:57.977374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:10.208 [2024-11-26 13:31:57.977385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.977964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.977989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.208 [2024-11-26 13:31:57.978001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:19:10.208 [2024-11-26 13:31:57.978009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.978133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.978143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.208 [2024-11-26 13:31:57.978156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:10.208 [2024-11-26 13:31:57.978164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:57.993874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:57.993917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.208 [2024-11-26 
13:31:57.993931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.688 ms 00:19:10.208 [2024-11-26 13:31:57.993948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.007101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:10.208 [2024-11-26 13:31:58.014726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.014907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.208 [2024-11-26 13:31:58.014925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.699 ms 00:19:10.208 [2024-11-26 13:31:58.014936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.108186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.108256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:10.208 [2024-11-26 13:31:58.108272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.219 ms 00:19:10.208 [2024-11-26 13:31:58.108284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.108504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.108523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:10.208 [2024-11-26 13:31:58.108533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:10.208 [2024-11-26 13:31:58.108547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.135065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.135121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:10.208 [2024-11-26 13:31:58.135135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:19:10.208 [2024-11-26 13:31:58.135146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.159471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.159520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:10.208 [2024-11-26 13:31:58.159533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.278 ms 00:19:10.208 [2024-11-26 13:31:58.159543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.160154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.160181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:10.208 [2024-11-26 13:31:58.160191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:19:10.208 [2024-11-26 13:31:58.160201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.243401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.243475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:10.208 [2024-11-26 13:31:58.243490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.162 ms 00:19:10.208 [2024-11-26 13:31:58.243502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 
13:31:58.271322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.271378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:10.208 [2024-11-26 13:31:58.271396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.735 ms 00:19:10.208 [2024-11-26 13:31:58.271406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.296993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.297045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:10.208 [2024-11-26 13:31:58.297057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.525 ms 00:19:10.208 [2024-11-26 13:31:58.297067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.322950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.323003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:10.208 [2024-11-26 13:31:58.323016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.840 ms 00:19:10.208 [2024-11-26 13:31:58.323027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.323076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.323091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:10.208 [2024-11-26 13:31:58.323101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:10.208 [2024-11-26 13:31:58.323111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.323200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.208 [2024-11-26 13:31:58.323212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:10.208 [2024-11-26 13:31:58.323222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:10.208 [2024-11-26 13:31:58.323232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.208 [2024-11-26 13:31:58.324411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3979.360 ms, result 0 00:19:10.208 { 00:19:10.208 "name": "ftl0", 00:19:10.208 "uuid": "78f5071e-1aec-42c4-b06b-8babeb511e18" 00:19:10.208 } 00:19:10.208 13:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:10.208 13:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:10.208 13:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:10.208 13:31:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:10.208 [2024-11-26 13:31:58.652544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:10.208 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:10.208 Zero copy mechanism will not be used. 00:19:10.208 Running I/O for 4 seconds... 
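Note: the first bdevperf pass above drives ftl0 at queue depth 1 with 68 KiB (69632-byte) random writes for 4 seconds; because 69632 exceeds bdevperf's 65536-byte zero-copy threshold, the notice confirms zero copy is disabled for this run. A minimal sketch of the sequence as traced here (paths and flags are copied from this log, not re-verified against current SPDK):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Confirm the FTL bdev is up before driving I/O (as bdevperf.sh@28 does above).
    "$SPDK"/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # Queue depth 1, random writes, 4 s, 69632-byte I/Os (> 65536, so no zero copy).
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632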
00:19:12.104 643.00 IOPS, 42.70 MiB/s [2024-11-26T13:32:02.059Z] 1204.50 IOPS, 79.99 MiB/s [2024-11-26T13:32:03.005Z] 1328.33 IOPS, 88.21 MiB/s [2024-11-26T13:32:03.005Z] 1229.25 IOPS, 81.63 MiB/s 00:19:14.435 Latency(us) 00:19:14.435 [2024-11-26T13:32:03.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.435 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:14.435 ftl0 : 4.00 1228.89 81.61 0.00 0.00 861.75 152.81 2986.93 00:19:14.435 [2024-11-26T13:32:03.005Z] =================================================================================================================== 00:19:14.435 [2024-11-26T13:32:03.005Z] Total : 1228.89 81.61 0.00 0.00 861.75 152.81 2986.93 00:19:14.435 { 00:19:14.435 "results": [ 00:19:14.435 { 00:19:14.435 "job": "ftl0", 00:19:14.435 "core_mask": "0x1", 00:19:14.435 "workload": "randwrite", 00:19:14.435 "status": "finished", 00:19:14.435 "queue_depth": 1, 00:19:14.435 "io_size": 69632, 00:19:14.435 "runtime": 4.001975, 00:19:14.435 "iops": 1228.8932339657295, 00:19:14.435 "mibps": 81.60619131803672, 00:19:14.435 "io_failed": 0, 00:19:14.435 "io_timeout": 0, 00:19:14.435 "avg_latency_us": 861.75003972847, 00:19:14.435 "min_latency_us": 152.81230769230768, 00:19:14.435 "max_latency_us": 2986.929230769231 00:19:14.435 } 00:19:14.435 ], 00:19:14.435 "core_count": 1 00:19:14.435 } 00:19:14.435 [2024-11-26 13:32:02.663768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:14.435 13:32:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:14.435 [2024-11-26 13:32:02.768439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:14.435 Running I/O for 4 seconds... 
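The reported throughput is consistent with IOPS times I/O size: for the queue-depth-1 run above, 1228.89 IOPS x 69632 B is about 81.61 MiB/s, matching the "mibps" field in the JSON results. A quick check, assuming bc is available on the test VM:

    # Throughput = IOPS * io_size; 1048576 bytes per MiB.
    echo "scale=3; 1228.89 * 69632 / 1048576" | bc   # -> 81.606, i.e. the 81.61 MiB/s reported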
00:19:16.325 6172.00 IOPS, 24.11 MiB/s [2024-11-26T13:32:05.838Z] 5785.00 IOPS, 22.60 MiB/s [2024-11-26T13:32:06.781Z] 5613.67 IOPS, 21.93 MiB/s [2024-11-26T13:32:07.043Z] 5551.50 IOPS, 21.69 MiB/s 00:19:18.473 Latency(us) 00:19:18.473 [2024-11-26T13:32:07.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.473 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:18.473 ftl0 : 4.03 5538.19 21.63 0.00 0.00 23022.90 321.38 49807.36 00:19:18.473 [2024-11-26T13:32:07.043Z] =================================================================================================================== 00:19:18.473 [2024-11-26T13:32:07.043Z] Total : 5538.19 21.63 0.00 0.00 23022.90 0.00 49807.36 00:19:18.473 { 00:19:18.473 "results": [ 00:19:18.473 { 00:19:18.473 "job": "ftl0", 00:19:18.473 "core_mask": "0x1", 00:19:18.473 "workload": "randwrite", 00:19:18.473 "status": "finished", 00:19:18.473 "queue_depth": 128, 00:19:18.473 "io_size": 4096, 00:19:18.473 "runtime": 4.032182, 00:19:18.473 "iops": 5538.192472462801, 00:19:18.473 "mibps": 21.633564345557815, 00:19:18.473 "io_failed": 0, 00:19:18.473 "io_timeout": 0, 00:19:18.473 "avg_latency_us": 23022.902516060807, 00:19:18.473 "min_latency_us": 321.3784615384615, 00:19:18.473 "max_latency_us": 49807.36 00:19:18.473 } 00:19:18.473 ], 00:19:18.473 "core_count": 1 00:19:18.473 } 00:19:18.473 [2024-11-26 13:32:06.811266] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:18.473 13:32:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:18.473 [2024-11-26 13:32:06.923552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:18.473 Running I/O for 4 seconds... 
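The queue-depth-128, 4 KiB randwrite pass above also agrees with Little's law (sustained IOPS is roughly queue_depth / mean latency): 128 / 0.023023 s gives about 5560, close to the 5538.19 IOPS reported; the small gap is plausibly ramp-up and accounting overhead rather than a measurement error. Sketch:

    # Little's law sanity check: IOPS ~= queue_depth / avg_latency_seconds.
    echo "scale=1; 128 / 0.023022902" | bc   # -> ~5559.7, vs 5538.19 reported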
00:19:20.795 5023.00 IOPS, 19.62 MiB/s [2024-11-26T13:32:10.307Z] 5907.00 IOPS, 23.07 MiB/s [2024-11-26T13:32:11.252Z] 5910.67 IOPS, 23.09 MiB/s [2024-11-26T13:32:11.253Z] 5669.25 IOPS, 22.15 MiB/s 00:19:22.683 Latency(us) 00:19:22.683 [2024-11-26T13:32:11.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.683 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:22.683 Verification LBA range: start 0x0 length 0x1400000 00:19:22.683 ftl0 : 4.01 5689.42 22.22 0.00 0.00 22442.66 272.54 79046.50 00:19:22.683 [2024-11-26T13:32:11.253Z] =================================================================================================================== 00:19:22.683 [2024-11-26T13:32:11.253Z] Total : 5689.42 22.22 0.00 0.00 22442.66 0.00 79046.50 00:19:22.683 [2024-11-26 13:32:10.945532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:22.683 { 00:19:22.683 "results": [ 00:19:22.683 { 00:19:22.683 "job": "ftl0", 00:19:22.683 "core_mask": "0x1", 00:19:22.683 "workload": "verify", 00:19:22.683 "status": "finished", 00:19:22.683 "verify_range": { 00:19:22.683 "start": 0, 00:19:22.683 "length": 20971520 00:19:22.683 }, 00:19:22.683 "queue_depth": 128, 00:19:22.683 "io_size": 4096, 00:19:22.683 "runtime": 4.00779, 00:19:22.683 "iops": 5689.419854832713, 00:19:22.683 "mibps": 22.224296307940286, 00:19:22.683 "io_failed": 0, 00:19:22.683 "io_timeout": 0, 00:19:22.683 "avg_latency_us": 22442.66323520879, 00:19:22.683 "min_latency_us": 272.54153846153844, 00:19:22.683 "max_latency_us": 79046.49846153846 00:19:22.683 } 00:19:22.683 ], 00:19:22.683 "core_count": 1 00:19:22.683 } 00:19:22.683 13:32:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:22.683 [2024-11-26 13:32:11.161568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.683 [2024-11-26 13:32:11.161605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.683 [2024-11-26 13:32:11.161614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.683 [2024-11-26 13:32:11.161622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.683 [2024-11-26 13:32:11.161637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.683 [2024-11-26 13:32:11.163699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.683 [2024-11-26 13:32:11.163723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.683 [2024-11-26 13:32:11.163733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:19:22.683 [2024-11-26 13:32:11.163739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.683 [2024-11-26 13:32:11.165507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.683 [2024-11-26 13:32:11.165532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.683 [2024-11-26 13:32:11.165542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:19:22.683 [2024-11-26 13:32:11.165553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.305961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.305990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:22.945 [2024-11-26 13:32:11.306005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.392 ms 00:19:22.945 [2024-11-26 13:32:11.306011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.310901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.310924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.945 [2024-11-26 13:32:11.310933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.863 ms 00:19:22.945 [2024-11-26 13:32:11.310940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.328683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.328800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.945 [2024-11-26 13:32:11.328817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.696 ms 00:19:22.945 [2024-11-26 13:32:11.328823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.340919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.340949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.945 [2024-11-26 13:32:11.340960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.071 ms 00:19:22.945 [2024-11-26 13:32:11.340966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.341064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.341072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.945 [2024-11-26 13:32:11.341082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:22.945 [2024-11-26 13:32:11.341088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.358943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.358968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.945 [2024-11-26 13:32:11.358977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.842 ms 00:19:22.945 [2024-11-26 13:32:11.358983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.376158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.376261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.945 [2024-11-26 13:32:11.376276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.149 ms 00:19:22.945 [2024-11-26 13:32:11.376281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.393167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.393191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.945 [2024-11-26 13:32:11.393200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.861 ms 00:19:22.945 [2024-11-26 13:32:11.393206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.410174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.945 [2024-11-26 13:32:11.410198] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.945 [2024-11-26 13:32:11.410209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.918 ms 00:19:22.945 [2024-11-26 13:32:11.410214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.945 [2024-11-26 13:32:11.410240] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.945 [2024-11-26 13:32:11.410251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.945 [2024-11-26 13:32:11.410326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:22.946 [2024-11-26 13:32:11.410399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410928] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.946 [2024-11-26 13:32:11.410960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.947 [2024-11-26 13:32:11.410967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78f5071e-1aec-42c4-b06b-8babeb511e18 00:19:22.947 [2024-11-26 13:32:11.410973] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.947 [2024-11-26 13:32:11.410981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.947 [2024-11-26 13:32:11.410986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.947 [2024-11-26 13:32:11.410993] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.947 [2024-11-26 13:32:11.410998] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.947 [2024-11-26 13:32:11.411005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.947 [2024-11-26 13:32:11.411010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.947 [2024-11-26 13:32:11.411017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.947 [2024-11-26 13:32:11.411022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.947 [2024-11-26 13:32:11.411029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.947 [2024-11-26 13:32:11.411034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.947 [2024-11-26 13:32:11.411042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:19:22.947 [2024-11-26 13:32:11.411047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.420514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.947 [2024-11-26 13:32:11.420539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.947 [2024-11-26 13:32:11.420547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.443 ms 00:19:22.947 [2024-11-26 13:32:11.420553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.420815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.947 [2024-11-26 13:32:11.420827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.947 [2024-11-26 13:32:11.420835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:22.947 [2024-11-26 13:32:11.420840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.448467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.947 [2024-11-26 13:32:11.448493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.947 [2024-11-26 13:32:11.448504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.947 [2024-11-26 13:32:11.448510] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.448552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.947 [2024-11-26 13:32:11.448559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.947 [2024-11-26 13:32:11.448566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.947 [2024-11-26 13:32:11.448572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.448624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.947 [2024-11-26 13:32:11.448632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.947 [2024-11-26 13:32:11.448640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.947 [2024-11-26 13:32:11.448645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.448657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.947 [2024-11-26 13:32:11.448663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.947 [2024-11-26 13:32:11.448670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.947 [2024-11-26 13:32:11.448676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.947 [2024-11-26 13:32:11.508757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.947 [2024-11-26 13:32:11.508783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.947 [2024-11-26 13:32:11.508794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.947 [2024-11-26 13:32:11.508800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.209 [2024-11-26 13:32:11.557078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.209 [2024-11-26 13:32:11.557168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.209 [2024-11-26 13:32:11.557221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.209 [2024-11-26 13:32:11.557311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:23.209 [2024-11-26 13:32:11.557317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:23.209 [2024-11-26 13:32:11.557355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.209 [2024-11-26 13:32:11.557403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:23.209 [2024-11-26 13:32:11.557476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.209 [2024-11-26 13:32:11.557484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:23.209 [2024-11-26 13:32:11.557489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.209 [2024-11-26 13:32:11.557589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.985 ms, result 0 00:19:23.209 true 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76002 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76002 ']' 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76002 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76002 00:19:23.209 killing process with pid 76002 00:19:23.209 Received shutdown signal, test time was about 4.000000 seconds 00:19:23.209 00:19:23.209 Latency(us) 00:19:23.209 [2024-11-26T13:32:11.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.209 [2024-11-26T13:32:11.779Z] =================================================================================================================== 00:19:23.209 [2024-11-26T13:32:11.779Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76002' 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76002 00:19:23.209 13:32:11 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76002 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:23.781 Remove shared memory files 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:23.781 13:32:12 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:23.781 ************************************ 00:19:23.781 END TEST ftl_bdevperf 00:19:23.781 ************************************ 00:19:23.781 00:19:23.781 real 0m21.787s 00:19:23.781 user 0m24.360s 00:19:23.781 sys 0m0.946s 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.781 13:32:12 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:23.781 13:32:12 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:23.781 13:32:12 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:23.781 13:32:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.781 13:32:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:23.781 ************************************ 00:19:23.781 START TEST ftl_trim 00:19:23.781 ************************************ 00:19:23.781 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:23.781 * Looking for test storage... 00:19:23.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:23.781 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:23.781 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:23.781 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.043 13:32:12 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.043 --rc genhtml_branch_coverage=1 00:19:24.043 --rc genhtml_function_coverage=1 00:19:24.043 --rc genhtml_legend=1 00:19:24.043 --rc geninfo_all_blocks=1 00:19:24.043 --rc geninfo_unexecuted_blocks=1 00:19:24.043 00:19:24.043 ' 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.043 --rc genhtml_branch_coverage=1 00:19:24.043 --rc genhtml_function_coverage=1 00:19:24.043 --rc genhtml_legend=1 00:19:24.043 --rc geninfo_all_blocks=1 00:19:24.043 --rc geninfo_unexecuted_blocks=1 00:19:24.043 00:19:24.043 ' 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.043 --rc genhtml_branch_coverage=1 00:19:24.043 --rc genhtml_function_coverage=1 00:19:24.043 --rc genhtml_legend=1 00:19:24.043 --rc geninfo_all_blocks=1 00:19:24.043 --rc geninfo_unexecuted_blocks=1 00:19:24.043 00:19:24.043 ' 00:19:24.043 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.043 --rc genhtml_branch_coverage=1 00:19:24.043 --rc genhtml_function_coverage=1 00:19:24.043 --rc genhtml_legend=1 00:19:24.043 --rc geninfo_all_blocks=1 00:19:24.043 --rc geninfo_unexecuted_blocks=1 00:19:24.043 00:19:24.043 ' 00:19:24.043 13:32:12 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:24.043 13:32:12 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:24.043 13:32:12 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.043 13:32:12 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
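The xtrace above shows scripts/common.sh's cmp_versions gating on the installed lcov: each version string is split on the characters ".-:" and compared numerically field by field, with missing fields treated as zero. A condensed reconstruction of that logic, inferred from the trace (a sketch, not the verbatim scripts/common.sh source):

    lt() {  # succeeds if version $1 sorts before version $2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo older   # prints "older", matching the branch taken above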
00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:24.044 13:32:12 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76347 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:24.044 13:32:12 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76347 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76347 ']' 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.044 13:32:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:24.044 [2024-11-26 13:32:12.467681] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:24.044 [2024-11-26 13:32:12.467904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76347 ] 00:19:24.305 [2024-11-26 13:32:12.623635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.305 [2024-11-26 13:32:12.702986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.305 [2024-11-26 13:32:12.703272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.305 [2024-11-26 13:32:12.703301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.877 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.877 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:24.877 13:32:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:24.877 13:32:13 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:24.878 13:32:13 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:24.878 13:32:13 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:24.878 13:32:13 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:24.878 13:32:13 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:25.139 13:32:13 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:25.139 13:32:13 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:25.139 13:32:13 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:25.139 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:25.139 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:25.139 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:25.139 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:25.139 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.400 { 00:19:25.400 "name": "nvme0n1", 00:19:25.400 "aliases": [ 
00:19:25.400 "0ac8e02a-5bc3-48a6-a8f8-4b5f94886732" 00:19:25.400 ], 00:19:25.400 "product_name": "NVMe disk", 00:19:25.400 "block_size": 4096, 00:19:25.400 "num_blocks": 1310720, 00:19:25.400 "uuid": "0ac8e02a-5bc3-48a6-a8f8-4b5f94886732", 00:19:25.400 "numa_id": -1, 00:19:25.400 "assigned_rate_limits": { 00:19:25.400 "rw_ios_per_sec": 0, 00:19:25.400 "rw_mbytes_per_sec": 0, 00:19:25.400 "r_mbytes_per_sec": 0, 00:19:25.400 "w_mbytes_per_sec": 0 00:19:25.400 }, 00:19:25.400 "claimed": true, 00:19:25.400 "claim_type": "read_many_write_one", 00:19:25.400 "zoned": false, 00:19:25.400 "supported_io_types": { 00:19:25.400 "read": true, 00:19:25.400 "write": true, 00:19:25.400 "unmap": true, 00:19:25.400 "flush": true, 00:19:25.400 "reset": true, 00:19:25.400 "nvme_admin": true, 00:19:25.400 "nvme_io": true, 00:19:25.400 "nvme_io_md": false, 00:19:25.400 "write_zeroes": true, 00:19:25.400 "zcopy": false, 00:19:25.400 "get_zone_info": false, 00:19:25.400 "zone_management": false, 00:19:25.400 "zone_append": false, 00:19:25.400 "compare": true, 00:19:25.400 "compare_and_write": false, 00:19:25.400 "abort": true, 00:19:25.400 "seek_hole": false, 00:19:25.400 "seek_data": false, 00:19:25.400 "copy": true, 00:19:25.400 "nvme_iov_md": false 00:19:25.400 }, 00:19:25.400 "driver_specific": { 00:19:25.400 "nvme": [ 00:19:25.400 { 00:19:25.400 "pci_address": "0000:00:11.0", 00:19:25.400 "trid": { 00:19:25.400 "trtype": "PCIe", 00:19:25.400 "traddr": "0000:00:11.0" 00:19:25.400 }, 00:19:25.400 "ctrlr_data": { 00:19:25.400 "cntlid": 0, 00:19:25.400 "vendor_id": "0x1b36", 00:19:25.400 "model_number": "QEMU NVMe Ctrl", 00:19:25.400 "serial_number": "12341", 00:19:25.400 "firmware_revision": "8.0.0", 00:19:25.400 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:25.400 "oacs": { 00:19:25.400 "security": 0, 00:19:25.400 "format": 1, 00:19:25.400 "firmware": 0, 00:19:25.400 "ns_manage": 1 00:19:25.400 }, 00:19:25.400 "multi_ctrlr": false, 00:19:25.400 "ana_reporting": false 00:19:25.400 }, 00:19:25.400 "vs": { 00:19:25.400 "nvme_version": "1.4" 00:19:25.400 }, 00:19:25.400 "ns_data": { 00:19:25.400 "id": 1, 00:19:25.400 "can_share": false 00:19:25.400 } 00:19:25.400 } 00:19:25.400 ], 00:19:25.400 "mp_policy": "active_passive" 00:19:25.400 } 00:19:25.400 } 00:19:25.400 ]' 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:25.400 13:32:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:25.400 13:32:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:25.400 13:32:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:25.400 13:32:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:25.400 13:32:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:25.400 13:32:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:25.662 13:32:14 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=defc29d6-eaff-4282-994c-8c169f041477 00:19:25.662 13:32:14 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:25.662 13:32:14 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u defc29d6-eaff-4282-994c-8c169f041477 00:19:25.923 13:32:14 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:25.923 13:32:14 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=48ccfd19-e773-41e9-bf82-258e9264f390 00:19:25.923 13:32:14 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 48ccfd19-e773-41e9-bf82-258e9264f390 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:26.184 13:32:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.184 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.184 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.184 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:26.184 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:26.184 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:26.446 { 00:19:26.446 "name": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:26.446 "aliases": [ 00:19:26.446 "lvs/nvme0n1p0" 00:19:26.446 ], 00:19:26.446 "product_name": "Logical Volume", 00:19:26.446 "block_size": 4096, 00:19:26.446 "num_blocks": 26476544, 00:19:26.446 "uuid": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:26.446 "assigned_rate_limits": { 00:19:26.446 "rw_ios_per_sec": 0, 00:19:26.446 "rw_mbytes_per_sec": 0, 00:19:26.446 "r_mbytes_per_sec": 0, 00:19:26.446 "w_mbytes_per_sec": 0 00:19:26.446 }, 00:19:26.446 "claimed": false, 00:19:26.446 "zoned": false, 00:19:26.446 "supported_io_types": { 00:19:26.446 "read": true, 00:19:26.446 "write": true, 00:19:26.446 "unmap": true, 00:19:26.446 "flush": false, 00:19:26.446 "reset": true, 00:19:26.446 "nvme_admin": false, 00:19:26.446 "nvme_io": false, 00:19:26.446 "nvme_io_md": false, 00:19:26.446 "write_zeroes": true, 00:19:26.446 "zcopy": false, 00:19:26.446 "get_zone_info": false, 00:19:26.446 "zone_management": false, 00:19:26.446 "zone_append": false, 00:19:26.446 "compare": false, 00:19:26.446 "compare_and_write": false, 00:19:26.446 "abort": false, 00:19:26.446 "seek_hole": true, 00:19:26.446 "seek_data": true, 00:19:26.446 "copy": false, 00:19:26.446 "nvme_iov_md": false 00:19:26.446 }, 00:19:26.446 "driver_specific": { 00:19:26.446 "lvol": { 00:19:26.446 "lvol_store_uuid": "48ccfd19-e773-41e9-bf82-258e9264f390", 00:19:26.446 "base_bdev": "nvme0n1", 00:19:26.446 "thin_provision": true, 00:19:26.446 "num_allocated_clusters": 0, 00:19:26.446 "snapshot": false, 00:19:26.446 "clone": false, 00:19:26.446 "esnap_clone": false 00:19:26.446 } 00:19:26.446 } 00:19:26.446 } 00:19:26.446 ]' 00:19:26.446 13:32:14 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:26.446 13:32:14 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:26.446 13:32:14 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:26.446 13:32:14 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:26.446 13:32:14 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:26.708 13:32:15 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:26.708 13:32:15 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:26.708 13:32:15 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.708 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.708 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.708 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:26.708 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:26.708 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:26.969 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:26.969 { 00:19:26.969 "name": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:26.969 "aliases": [ 00:19:26.969 "lvs/nvme0n1p0" 00:19:26.969 ], 00:19:26.969 "product_name": "Logical Volume", 00:19:26.969 "block_size": 4096, 00:19:26.969 "num_blocks": 26476544, 00:19:26.969 "uuid": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:26.970 "assigned_rate_limits": { 00:19:26.970 "rw_ios_per_sec": 0, 00:19:26.970 "rw_mbytes_per_sec": 0, 00:19:26.970 "r_mbytes_per_sec": 0, 00:19:26.970 "w_mbytes_per_sec": 0 00:19:26.970 }, 00:19:26.970 "claimed": false, 00:19:26.970 "zoned": false, 00:19:26.970 "supported_io_types": { 00:19:26.970 "read": true, 00:19:26.970 "write": true, 00:19:26.970 "unmap": true, 00:19:26.970 "flush": false, 00:19:26.970 "reset": true, 00:19:26.970 "nvme_admin": false, 00:19:26.970 "nvme_io": false, 00:19:26.970 "nvme_io_md": false, 00:19:26.970 "write_zeroes": true, 00:19:26.970 "zcopy": false, 00:19:26.970 "get_zone_info": false, 00:19:26.970 "zone_management": false, 00:19:26.970 "zone_append": false, 00:19:26.970 "compare": false, 00:19:26.970 "compare_and_write": false, 00:19:26.970 "abort": false, 00:19:26.970 "seek_hole": true, 00:19:26.970 "seek_data": true, 00:19:26.970 "copy": false, 00:19:26.970 "nvme_iov_md": false 00:19:26.970 }, 00:19:26.970 "driver_specific": { 00:19:26.970 "lvol": { 00:19:26.970 "lvol_store_uuid": "48ccfd19-e773-41e9-bf82-258e9264f390", 00:19:26.970 "base_bdev": "nvme0n1", 00:19:26.970 "thin_provision": true, 00:19:26.970 "num_allocated_clusters": 0, 00:19:26.970 "snapshot": false, 00:19:26.970 "clone": false, 00:19:26.970 "esnap_clone": false 00:19:26.970 } 00:19:26.970 } 00:19:26.970 } 00:19:26.970 ]' 00:19:26.970 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:26.970 13:32:15 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:26.970 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.970 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:26.970 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:26.970 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:26.970 13:32:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:26.970 13:32:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:27.230 13:32:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:27.230 13:32:15 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:27.230 13:32:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:27.230 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:27.230 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.230 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:27.230 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:27.230 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c32f5ae3-e700-481e-a78d-c2fcee3635f1 00:19:27.491 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.491 { 00:19:27.491 "name": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:27.491 "aliases": [ 00:19:27.491 "lvs/nvme0n1p0" 00:19:27.491 ], 00:19:27.491 "product_name": "Logical Volume", 00:19:27.491 "block_size": 4096, 00:19:27.491 "num_blocks": 26476544, 00:19:27.491 "uuid": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:27.491 "assigned_rate_limits": { 00:19:27.491 "rw_ios_per_sec": 0, 00:19:27.491 "rw_mbytes_per_sec": 0, 00:19:27.491 "r_mbytes_per_sec": 0, 00:19:27.491 "w_mbytes_per_sec": 0 00:19:27.491 }, 00:19:27.491 "claimed": false, 00:19:27.491 "zoned": false, 00:19:27.491 "supported_io_types": { 00:19:27.491 "read": true, 00:19:27.491 "write": true, 00:19:27.491 "unmap": true, 00:19:27.491 "flush": false, 00:19:27.491 "reset": true, 00:19:27.491 "nvme_admin": false, 00:19:27.491 "nvme_io": false, 00:19:27.491 "nvme_io_md": false, 00:19:27.491 "write_zeroes": true, 00:19:27.491 "zcopy": false, 00:19:27.491 "get_zone_info": false, 00:19:27.491 "zone_management": false, 00:19:27.491 "zone_append": false, 00:19:27.491 "compare": false, 00:19:27.491 "compare_and_write": false, 00:19:27.491 "abort": false, 00:19:27.491 "seek_hole": true, 00:19:27.491 "seek_data": true, 00:19:27.491 "copy": false, 00:19:27.491 "nvme_iov_md": false 00:19:27.491 }, 00:19:27.491 "driver_specific": { 00:19:27.491 "lvol": { 00:19:27.491 "lvol_store_uuid": "48ccfd19-e773-41e9-bf82-258e9264f390", 00:19:27.491 "base_bdev": "nvme0n1", 00:19:27.491 "thin_provision": true, 00:19:27.491 "num_allocated_clusters": 0, 00:19:27.491 "snapshot": false, 00:19:27.491 "clone": false, 00:19:27.491 "esnap_clone": false 00:19:27.491 } 00:19:27.491 } 00:19:27.491 } 00:19:27.491 ]' 00:19:27.491 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.491 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.491 13:32:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.491 13:32:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:27.491 13:32:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.491 13:32:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.491 13:32:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:27.491 13:32:16 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c32f5ae3-e700-481e-a78d-c2fcee3635f1 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:27.751 [2024-11-26 13:32:16.196981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.197018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.751 [2024-11-26 13:32:16.197032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.751 [2024-11-26 13:32:16.197040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.199866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.199897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.751 [2024-11-26 13:32:16.199908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:19:27.751 [2024-11-26 13:32:16.199916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.200027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.751 [2024-11-26 13:32:16.200703] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.751 [2024-11-26 13:32:16.200728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.200736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.751 [2024-11-26 13:32:16.200747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:19:27.751 [2024-11-26 13:32:16.200755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.200922] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d09fa75c-4f30-4107-83fe-472028788725 00:19:27.751 [2024-11-26 13:32:16.202018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.202046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:27.751 [2024-11-26 13:32:16.202056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:27.751 [2024-11-26 13:32:16.202065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.207598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.207624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.751 [2024-11-26 13:32:16.207633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.448 ms 00:19:27.751 [2024-11-26 13:32:16.207644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.207764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.207776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.751 [2024-11-26 13:32:16.207784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:19:27.751 [2024-11-26 13:32:16.207796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.207837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.207847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.751 [2024-11-26 13:32:16.207855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.751 [2024-11-26 13:32:16.207866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.207905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:27.751 [2024-11-26 13:32:16.211522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.211547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.751 [2024-11-26 13:32:16.211560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:19:27.751 [2024-11-26 13:32:16.211568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.211622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.211644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.751 [2024-11-26 13:32:16.211654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:27.751 [2024-11-26 13:32:16.211662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.211711] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:27.751 [2024-11-26 13:32:16.211846] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.751 [2024-11-26 13:32:16.211861] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.751 [2024-11-26 13:32:16.211872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:27.751 [2024-11-26 13:32:16.211884] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.751 [2024-11-26 13:32:16.211894] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.751 [2024-11-26 13:32:16.211904] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:27.751 [2024-11-26 13:32:16.211911] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.751 [2024-11-26 13:32:16.211921] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.751 [2024-11-26 13:32:16.211930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.751 [2024-11-26 13:32:16.211939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 [2024-11-26 13:32:16.211946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.751 [2024-11-26 13:32:16.211955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:19:27.751 [2024-11-26 13:32:16.211963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.212066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.751 
[2024-11-26 13:32:16.212075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.751 [2024-11-26 13:32:16.212084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:27.751 [2024-11-26 13:32:16.212090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.751 [2024-11-26 13:32:16.212210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.751 [2024-11-26 13:32:16.212219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.751 [2024-11-26 13:32:16.212230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.751 [2024-11-26 13:32:16.212253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.751 [2024-11-26 13:32:16.212277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.751 [2024-11-26 13:32:16.212292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.751 [2024-11-26 13:32:16.212299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:27.751 [2024-11-26 13:32:16.212307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.751 [2024-11-26 13:32:16.212314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.751 [2024-11-26 13:32:16.212322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:27.751 [2024-11-26 13:32:16.212330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.751 [2024-11-26 13:32:16.212346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.751 [2024-11-26 13:32:16.212371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.751 [2024-11-26 13:32:16.212392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.751 [2024-11-26 13:32:16.212415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:27.751 [2024-11-26 13:32:16.212423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.751 [2024-11-26 13:32:16.212431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:27.752 [2024-11-26 13:32:16.212451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:27.752 [2024-11-26 13:32:16.212461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.752 [2024-11-26 13:32:16.212468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.752 [2024-11-26 13:32:16.212478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:27.752 [2024-11-26 13:32:16.212485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.752 [2024-11-26 13:32:16.212494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.752 [2024-11-26 13:32:16.212500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:27.752 [2024-11-26 13:32:16.212508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.752 [2024-11-26 13:32:16.212514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.752 [2024-11-26 13:32:16.212523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:27.752 [2024-11-26 13:32:16.212530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.752 [2024-11-26 13:32:16.212538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.752 [2024-11-26 13:32:16.212544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:27.752 [2024-11-26 13:32:16.212552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.752 [2024-11-26 13:32:16.212559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.752 [2024-11-26 13:32:16.212568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.752 [2024-11-26 13:32:16.212575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.752 [2024-11-26 13:32:16.212584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.752 [2024-11-26 13:32:16.212592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.752 [2024-11-26 13:32:16.212603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.752 [2024-11-26 13:32:16.212610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.752 [2024-11-26 13:32:16.212619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.752 [2024-11-26 13:32:16.212626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.752 [2024-11-26 13:32:16.212634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.752 [2024-11-26 13:32:16.212644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.752 [2024-11-26 13:32:16.212655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:27.752 [2024-11-26 13:32:16.212676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:27.752 [2024-11-26 13:32:16.212683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:27.752 [2024-11-26 13:32:16.212693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:27.752 [2024-11-26 13:32:16.212700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:27.752 [2024-11-26 13:32:16.212709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:27.752 [2024-11-26 13:32:16.212717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:27.752 [2024-11-26 13:32:16.212726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:27.752 [2024-11-26 13:32:16.212733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:27.752 [2024-11-26 13:32:16.212743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:27.752 [2024-11-26 13:32:16.212782] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.752 [2024-11-26 13:32:16.212793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.752 [2024-11-26 13:32:16.212809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.752 [2024-11-26 13:32:16.212816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.752 [2024-11-26 13:32:16.212827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.752 [2024-11-26 13:32:16.212835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.752 [2024-11-26 13:32:16.212844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.752 [2024-11-26 13:32:16.212851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:19:27.752 [2024-11-26 13:32:16.212860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.752 [2024-11-26 13:32:16.212939] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:27.752 [2024-11-26 13:32:16.212952] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:30.283 [2024-11-26 13:32:18.553885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.283 [2024-11-26 13:32:18.553941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:30.283 [2024-11-26 13:32:18.553956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2340.934 ms 00:19:30.283 [2024-11-26 13:32:18.553966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.283 [2024-11-26 13:32:18.579848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.283 [2024-11-26 13:32:18.579890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.283 [2024-11-26 13:32:18.579902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.624 ms 00:19:30.284 [2024-11-26 13:32:18.579912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.580039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.580051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.284 [2024-11-26 13:32:18.580076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:30.284 [2024-11-26 13:32:18.580090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.621916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.621970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.284 [2024-11-26 13:32:18.621986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.791 ms 00:19:30.284 [2024-11-26 13:32:18.622002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.622100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.622119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.284 [2024-11-26 13:32:18.622131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.284 [2024-11-26 13:32:18.622144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.622539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.622566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.284 [2024-11-26 13:32:18.622579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:19:30.284 [2024-11-26 13:32:18.622591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.622757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.622772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.284 [2024-11-26 13:32:18.622799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:30.284 [2024-11-26 13:32:18.622815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.638378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.638409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:30.284 [2024-11-26 13:32:18.638419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.524 ms 00:19:30.284 [2024-11-26 13:32:18.638429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.649877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.284 [2024-11-26 13:32:18.664606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.664636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.284 [2024-11-26 13:32:18.664649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.065 ms 00:19:30.284 [2024-11-26 13:32:18.664657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.725269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.725309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:30.284 [2024-11-26 13:32:18.725324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.529 ms 00:19:30.284 [2024-11-26 13:32:18.725333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.725546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.725564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.284 [2024-11-26 13:32:18.725578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:30.284 [2024-11-26 13:32:18.725587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.748700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.748729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:30.284 [2024-11-26 13:32:18.748741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:19:30.284 [2024-11-26 13:32:18.748752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.771138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.771175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:30.284 [2024-11-26 13:32:18.771188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.342 ms 00:19:30.284 [2024-11-26 13:32:18.771195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.771799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.771819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.284 [2024-11-26 13:32:18.771830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:30.284 [2024-11-26 13:32:18.771838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-11-26 13:32:18.836529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-11-26 13:32:18.836558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:30.284 [2024-11-26 13:32:18.836574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.659 ms 00:19:30.284 [2024-11-26 13:32:18.836582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
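(For reference, a minimal, illustrative cross-check of the L2P sizing reported in the startup trace above — not part of the captured test run. The constants are taken directly from the log: "L2P entries: 23592960", "L2P address size: 4", and "block_size": 4096 from the bdev JSON; the "l2p maximum resident size is: 59 (of 60) MiB" notice reflects the --l2p_dram_limit 60 passed to bdev_ftl_create, since the full table is larger than 60 MiB and only part of it can stay resident in DRAM at once.)

    # Illustrative back-of-the-envelope check only; values copied from the trace above.
    l2p_entries=23592960   # "L2P entries" in the layout dump
    entry_size=4           # "L2P address size"
    block_size=4096        # "block_size" in the bdev JSON
    echo $(( l2p_entries * entry_size / 1024 / 1024 ))          # 90 -> matches "Region l2p ... blocks: 90.00 MiB"
    echo $(( l2p_entries * block_size / 1024 / 1024 / 1024 ))   # 90 -> 90 GiB, i.e. ftl0's 23592960 4 KiB blocks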
00:19:30.543 [2024-11-26 13:32:18.860315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.543 [2024-11-26 13:32:18.860342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:30.543 [2024-11-26 13:32:18.860354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.633 ms 00:19:30.543 [2024-11-26 13:32:18.860363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.543 [2024-11-26 13:32:18.882738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.543 [2024-11-26 13:32:18.882763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:30.543 [2024-11-26 13:32:18.882774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.312 ms 00:19:30.543 [2024-11-26 13:32:18.882782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.543 [2024-11-26 13:32:18.905752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.543 [2024-11-26 13:32:18.905791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.543 [2024-11-26 13:32:18.905803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.896 ms 00:19:30.543 [2024-11-26 13:32:18.905811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.543 [2024-11-26 13:32:18.905876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.543 [2024-11-26 13:32:18.905886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.543 [2024-11-26 13:32:18.905898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:30.543 [2024-11-26 13:32:18.905906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.543 [2024-11-26 13:32:18.905988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.543 [2024-11-26 13:32:18.905998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.543 [2024-11-26 13:32:18.906007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:30.543 [2024-11-26 13:32:18.906014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.543 [2024-11-26 13:32:18.906893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.543 [2024-11-26 13:32:18.909799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2709.523 ms, result 0 00:19:30.543 [2024-11-26 13:32:18.910675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.543 { 00:19:30.543 "name": "ftl0", 00:19:30.543 "uuid": "d09fa75c-4f30-4107-83fe-472028788725" 00:19:30.543 } 00:19:30.543 13:32:18 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.543 13:32:18 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:30.800 13:32:19 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:30.800 [ 00:19:30.800 { 00:19:30.800 "name": "ftl0", 00:19:30.800 "aliases": [ 00:19:30.800 "d09fa75c-4f30-4107-83fe-472028788725" 00:19:30.800 ], 00:19:30.800 "product_name": "FTL disk", 00:19:30.800 "block_size": 4096, 00:19:30.800 "num_blocks": 23592960, 00:19:30.800 "uuid": "d09fa75c-4f30-4107-83fe-472028788725", 00:19:30.800 "assigned_rate_limits": { 00:19:30.800 "rw_ios_per_sec": 0, 00:19:30.800 "rw_mbytes_per_sec": 0, 00:19:30.800 "r_mbytes_per_sec": 0, 00:19:30.800 "w_mbytes_per_sec": 0 00:19:30.800 }, 00:19:30.800 "claimed": false, 00:19:30.800 "zoned": false, 00:19:30.800 "supported_io_types": { 00:19:30.800 "read": true, 00:19:30.800 "write": true, 00:19:30.800 "unmap": true, 00:19:30.800 "flush": true, 00:19:30.800 "reset": false, 00:19:30.800 "nvme_admin": false, 00:19:30.800 "nvme_io": false, 00:19:30.800 "nvme_io_md": false, 00:19:30.800 "write_zeroes": true, 00:19:30.800 "zcopy": false, 00:19:30.800 "get_zone_info": false, 00:19:30.800 "zone_management": false, 00:19:30.800 "zone_append": false, 00:19:30.800 "compare": false, 00:19:30.800 "compare_and_write": false, 00:19:30.800 "abort": false, 00:19:30.800 "seek_hole": false, 00:19:30.800 "seek_data": false, 00:19:30.800 "copy": false, 00:19:30.800 "nvme_iov_md": false 00:19:30.800 }, 00:19:30.800 "driver_specific": { 00:19:30.800 "ftl": { 00:19:30.800 "base_bdev": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 00:19:30.800 "cache": "nvc0n1p0" 00:19:30.800 } 00:19:30.800 } 00:19:30.801 } 00:19:30.801 ] 00:19:30.801 13:32:19 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:30.801 13:32:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:30.801 13:32:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:31.057 13:32:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:31.057 13:32:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:31.314 13:32:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:31.314 { 00:19:31.314 "name": "ftl0", 00:19:31.314 "aliases": [ 00:19:31.314 "d09fa75c-4f30-4107-83fe-472028788725" 00:19:31.314 ], 00:19:31.314 "product_name": "FTL disk", 00:19:31.314 "block_size": 4096, 00:19:31.314 "num_blocks": 23592960, 00:19:31.314 "uuid": "d09fa75c-4f30-4107-83fe-472028788725", 00:19:31.314 "assigned_rate_limits": { 00:19:31.314 "rw_ios_per_sec": 0, 00:19:31.314 "rw_mbytes_per_sec": 0, 00:19:31.314 "r_mbytes_per_sec": 0, 00:19:31.314 "w_mbytes_per_sec": 0 00:19:31.314 }, 00:19:31.314 "claimed": false, 00:19:31.314 "zoned": false, 00:19:31.314 "supported_io_types": { 00:19:31.314 "read": true, 00:19:31.314 "write": true, 00:19:31.314 "unmap": true, 00:19:31.314 "flush": true, 00:19:31.314 "reset": false, 00:19:31.314 "nvme_admin": false, 00:19:31.314 "nvme_io": false, 00:19:31.314 "nvme_io_md": false, 00:19:31.314 "write_zeroes": true, 00:19:31.314 "zcopy": false, 00:19:31.314 "get_zone_info": false, 00:19:31.314 "zone_management": false, 00:19:31.314 "zone_append": false, 00:19:31.314 "compare": false, 00:19:31.314 "compare_and_write": false, 00:19:31.314 "abort": false, 00:19:31.314 "seek_hole": false, 00:19:31.314 "seek_data": false, 00:19:31.314 "copy": false, 00:19:31.314 "nvme_iov_md": false 00:19:31.314 }, 00:19:31.314 "driver_specific": { 00:19:31.314 "ftl": { 00:19:31.314 "base_bdev": "c32f5ae3-e700-481e-a78d-c2fcee3635f1", 
00:19:31.314 "cache": "nvc0n1p0" 00:19:31.314 } 00:19:31.314 } 00:19:31.314 } 00:19:31.314 ]' 00:19:31.314 13:32:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:31.314 13:32:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:31.314 13:32:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:31.571 [2024-11-26 13:32:19.958412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.958460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.572 [2024-11-26 13:32:19.958473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.572 [2024-11-26 13:32:19.958485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:19.958521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:31.572 [2024-11-26 13:32:19.961097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.961123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.572 [2024-11-26 13:32:19.961137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:19:31.572 [2024-11-26 13:32:19.961146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:19.961759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.961778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.572 [2024-11-26 13:32:19.961789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:19:31.572 [2024-11-26 13:32:19.961797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:19.965436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.965468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:31.572 [2024-11-26 13:32:19.965479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms 00:19:31.572 [2024-11-26 13:32:19.965487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:19.972505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.972529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.572 [2024-11-26 13:32:19.972540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:19:31.572 [2024-11-26 13:32:19.972547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:19.995829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:19.995856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.572 [2024-11-26 13:32:19.995871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.201 ms 00:19:31.572 [2024-11-26 13:32:19.995878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.010839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.010871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.572 [2024-11-26 13:32:20.010888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.901 ms 00:19:31.572 [2024-11-26 13:32:20.010895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.011121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.011132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.572 [2024-11-26 13:32:20.011143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:19:31.572 [2024-11-26 13:32:20.011150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.033608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.033633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.572 [2024-11-26 13:32:20.033645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.426 ms 00:19:31.572 [2024-11-26 13:32:20.033653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.055644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.055669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.572 [2024-11-26 13:32:20.055684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.926 ms 00:19:31.572 [2024-11-26 13:32:20.055691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.077400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.077424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.572 [2024-11-26 13:32:20.077436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.652 ms 00:19:31.572 [2024-11-26 13:32:20.077452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.099556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.572 [2024-11-26 13:32:20.099581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.572 [2024-11-26 13:32:20.099592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.985 ms 00:19:31.572 [2024-11-26 13:32:20.099599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.572 [2024-11-26 13:32:20.099653] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.572 [2024-11-26 13:32:20.099667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099731] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 
[2024-11-26 13:32:20.099954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.099996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.572 [2024-11-26 13:32:20.100052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:31.573 [2024-11-26 13:32:20.100160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.573 [2024-11-26 13:32:20.100545] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.573 [2024-11-26 13:32:20.100556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725 00:19:31.573 [2024-11-26 13:32:20.100563] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.573 [2024-11-26 13:32:20.100572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.573 [2024-11-26 13:32:20.100578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.573 [2024-11-26 13:32:20.100590] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.573 [2024-11-26 13:32:20.100597] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.573 [2024-11-26 13:32:20.100605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:31.573 [2024-11-26 13:32:20.100613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.573 [2024-11-26 13:32:20.100620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.573 [2024-11-26 13:32:20.100626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.573 [2024-11-26 13:32:20.100635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.573 [2024-11-26 13:32:20.100642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.573 [2024-11-26 13:32:20.100652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:19:31.573 [2024-11-26 13:32:20.100660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.573 [2024-11-26 13:32:20.113109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.573 [2024-11-26 13:32:20.113135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.573 [2024-11-26 13:32:20.113149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.413 ms 00:19:31.573 [2024-11-26 13:32:20.113157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.573 [2024-11-26 13:32:20.113556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.573 [2024-11-26 13:32:20.113575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.573 [2024-11-26 13:32:20.113585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:19:31.573 [2024-11-26 13:32:20.113593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.831 [2024-11-26 13:32:20.157458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.831 [2024-11-26 13:32:20.157484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.831 [2024-11-26 13:32:20.157495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.831 [2024-11-26 13:32:20.157503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.157597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.157608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.832 [2024-11-26 13:32:20.157617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.157624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.157687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.157698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.832 [2024-11-26 13:32:20.157709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.157716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.157750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.157759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.832 [2024-11-26 13:32:20.157767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.157774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.238686] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.238729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.832 [2024-11-26 13:32:20.238741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.238748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.300943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.300974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.832 [2024-11-26 13:32:20.300986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.300994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.832 [2024-11-26 13:32:20.301110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.832 [2024-11-26 13:32:20.301191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.832 [2024-11-26 13:32:20.301323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.832 [2024-11-26 13:32:20.301414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.832 [2024-11-26 13:32:20.301520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.832 [2024-11-26 13:32:20.301582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.832 [2024-11-26 13:32:20.301593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.832 [2024-11-26 13:32:20.301602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.832 [2024-11-26 13:32:20.301609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:19:31.832 [2024-11-26 13:32:20.301796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.372 ms, result 0
00:19:31.832 true
00:19:31.832 13:32:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76347
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76347 ']'
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76347
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76347
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:31.832 13:32:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 76347
13:32:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76347'
13:32:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76347
13:32:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76347
00:19:37.103 13:32:25 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:38.042 65536+0 records in
00:19:38.042 65536+0 records out
00:19:38.042 268435456 bytes (268 MB, 256 MiB) copied, 0.999656 s, 269 MB/s
00:19:38.042 13:32:26 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-26 13:32:26.622782] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
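A note on the dumps that close the shutdown trace above, since the same counters recur throughout this log: in each band line, "0 / 261120" is presumably the band's valid blocks out of its 261120-block capacity, followed by the band's write count and state; every band is still free here because no user data had been written yet. That also explains the statistics block: with total writes: 960 and user writes: 0, write amplification in its usual sense of media writes over host writes comes out as

    WAF = total writes / user writes = 960 / 0  ->  reported as "inf"

i.e. all 960 media writes so far were FTL metadata. The layout numbers in the startup dump further down cross-check the same way: 23592960 L2P entries x 4 bytes per address = 94371840 bytes = 90.00 MiB, exactly the size dumped for the l2p region.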
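Every management step in these traces is emitted as a fixed Action/Rollback record: a name: line, a duration: line and a status: line. To see which steps dominate a run such as the 343.372 ms 'FTL shutdown' above, a throwaway filter along these lines is enough; the file name console.log is illustrative, and this assumes one record per line as reflowed above:

    awk '
        # remember the most recent step name ...
        /trace_step/ && / name: /     { sub(/.* name: /, ""); name = $0 }
        # ... and emit "<duration><TAB><name>" once its duration record arrives
        /trace_step/ && / duration: / { sub(/.* duration: /, ""); print $0 "\t" name }
    ' console.log | sort -rn | head

sort -rn orders the pairs by the leading millisecond count, so the slowest steps (here 'Deinitialize L2P' at 12.413 ms) surface first.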
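The xtrace records above (autotest_common.sh@954-978) are the harness's killprocess helper tearing down the app under test: pid 76347, whose command name resolves to reactor_0. A minimal sketch of the same pattern, reconstructed from the trace rather than taken from the actual SPDK helper, looks like:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @954: a pid is required
        kill -0 "$pid" || return 1                # @958: signal 0 only probes liveness
        if [ "$(uname)" = Linux ]; then
            # @960: resolve the command name behind the pid
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # @964: never kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                               # @973: default SIGTERM
        wait "$pid"                               # @978: reap it and surface its exit status
    }

The final wait is why five seconds elapse between the kill at 13:32:20 and the next command at 13:32:25: the reactor gets time to shut down cleanly.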
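trim.sh then generates the payload for the rest of the test: 65536 random blocks of 4 KiB, i.e. 65536 x 4096 = 268435456 bytes = 256 MiB, which plain dd produced at 269 MB/s. The trace does not show dd's output redirection, but given that spdk_dd immediately reads test/ftl/random_pattern, the pair is presumably equivalent to the following, with $SPDK_DIR as shorthand for /home/vagrant/spdk_repo/spdk:

    # 65536 blocks x 4 KiB = 256 MiB of random test data
    dd if=/dev/urandom of=$SPDK_DIR/test/ftl/random_pattern bs=4K count=65536

    # write it through the FTL bdev; spdk_dd is SPDK's dd counterpart that can
    # target bdevs (--ob) inside an app configured from the given JSON
    $SPDK_DIR/build/bin/spdk_dd --if=$SPDK_DIR/test/ftl/random_pattern \
        --ob=ftl0 --json=$SPDK_DIR/test/ftl/config/ftl.json

The Copying progress lines further down put the FTL-side write at an average 22 MBps.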
00:19:38.301 [2024-11-26 13:32:26.622900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76526 ] 00:19:38.301 [2024-11-26 13:32:26.779155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.301 [2024-11-26 13:32:26.853431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.561 [2024-11-26 13:32:27.057596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.561 [2024-11-26 13:32:27.057642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.825 [2024-11-26 13:32:27.209480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.209515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.825 [2024-11-26 13:32:27.209526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.825 [2024-11-26 13:32:27.209533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.211595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.211621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.825 [2024-11-26 13:32:27.211629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.050 ms 00:19:38.825 [2024-11-26 13:32:27.211634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.211690] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.825 [2024-11-26 13:32:27.212228] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.825 [2024-11-26 13:32:27.212248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.212255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.825 [2024-11-26 13:32:27.212262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:38.825 [2024-11-26 13:32:27.212268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.213226] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.825 [2024-11-26 13:32:27.222492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.222519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.825 [2024-11-26 13:32:27.222527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.267 ms 00:19:38.825 [2024-11-26 13:32:27.222533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.222599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.222608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.825 [2024-11-26 13:32:27.222614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:38.825 [2024-11-26 13:32:27.222620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.226894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:38.825 [2024-11-26 13:32:27.226919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.825 [2024-11-26 13:32:27.226926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.245 ms 00:19:38.825 [2024-11-26 13:32:27.226932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.227002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.227010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.825 [2024-11-26 13:32:27.227016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:38.825 [2024-11-26 13:32:27.227022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.227039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.227046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.825 [2024-11-26 13:32:27.227051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.825 [2024-11-26 13:32:27.227057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.227075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.825 [2024-11-26 13:32:27.229618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.229648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.825 [2024-11-26 13:32:27.229655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:19:38.825 [2024-11-26 13:32:27.229664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.229695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.229705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.825 [2024-11-26 13:32:27.229711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.825 [2024-11-26 13:32:27.229717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.229732] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.825 [2024-11-26 13:32:27.229748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:38.825 [2024-11-26 13:32:27.229775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.825 [2024-11-26 13:32:27.229791] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:38.825 [2024-11-26 13:32:27.229879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.825 [2024-11-26 13:32:27.229888] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.825 [2024-11-26 13:32:27.229896] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.825 [2024-11-26 13:32:27.229905] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.825 [2024-11-26 13:32:27.229912] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.825 [2024-11-26 13:32:27.229918] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.825 [2024-11-26 13:32:27.229924] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.825 [2024-11-26 13:32:27.229930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.825 [2024-11-26 13:32:27.229936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.825 [2024-11-26 13:32:27.229942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.229947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.825 [2024-11-26 13:32:27.229953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:19:38.825 [2024-11-26 13:32:27.229959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.230025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.825 [2024-11-26 13:32:27.230039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.825 [2024-11-26 13:32:27.230046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:38.825 [2024-11-26 13:32:27.230051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.825 [2024-11-26 13:32:27.230129] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.825 [2024-11-26 13:32:27.230137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.825 [2024-11-26 13:32:27.230144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.825 [2024-11-26 13:32:27.230150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.825 [2024-11-26 13:32:27.230163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.825 [2024-11-26 13:32:27.230174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.825 [2024-11-26 13:32:27.230181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.825 [2024-11-26 13:32:27.230191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.825 [2024-11-26 13:32:27.230200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.825 [2024-11-26 13:32:27.230205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.825 [2024-11-26 13:32:27.230212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.825 [2024-11-26 13:32:27.230218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:38.825 [2024-11-26 13:32:27.230223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.825 [2024-11-26 13:32:27.230234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:38.825 [2024-11-26 13:32:27.230239] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.825 [2024-11-26 13:32:27.230249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.825 [2024-11-26 13:32:27.230259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.825 [2024-11-26 13:32:27.230265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.825 [2024-11-26 13:32:27.230270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.826 [2024-11-26 13:32:27.230275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.826 [2024-11-26 13:32:27.230281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.826 [2024-11-26 13:32:27.230290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.826 [2024-11-26 13:32:27.230295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.826 [2024-11-26 13:32:27.230306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.826 [2024-11-26 13:32:27.230310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.826 [2024-11-26 13:32:27.230320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.826 [2024-11-26 13:32:27.230325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:38.826 [2024-11-26 13:32:27.230330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.826 [2024-11-26 13:32:27.230335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.826 [2024-11-26 13:32:27.230341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:38.826 [2024-11-26 13:32:27.230347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.826 [2024-11-26 13:32:27.230357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:38.826 [2024-11-26 13:32:27.230363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.826 [2024-11-26 13:32:27.230373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.826 [2024-11-26 13:32:27.230380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.826 [2024-11-26 13:32:27.230385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.826 [2024-11-26 13:32:27.230391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.826 [2024-11-26 13:32:27.230397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.826 [2024-11-26 13:32:27.230402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.826 
[2024-11-26 13:32:27.230408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.826 [2024-11-26 13:32:27.230413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.826 [2024-11-26 13:32:27.230418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.826 [2024-11-26 13:32:27.230424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.826 [2024-11-26 13:32:27.230431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.826 [2024-11-26 13:32:27.230456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:38.826 [2024-11-26 13:32:27.230461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.826 [2024-11-26 13:32:27.230467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:38.826 [2024-11-26 13:32:27.230472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:38.826 [2024-11-26 13:32:27.230481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:38.826 [2024-11-26 13:32:27.230488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:38.826 [2024-11-26 13:32:27.230493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:38.826 [2024-11-26 13:32:27.230499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:38.826 [2024-11-26 13:32:27.230505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:38.826 [2024-11-26 13:32:27.230533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.826 [2024-11-26 13:32:27.230539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.826 [2024-11-26 13:32:27.230550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.826 [2024-11-26 13:32:27.230556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.826 [2024-11-26 13:32:27.230562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.826 [2024-11-26 13:32:27.230568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.230576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.826 [2024-11-26 13:32:27.230582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:19:38.826 [2024-11-26 13:32:27.230587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.251101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.251128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.826 [2024-11-26 13:32:27.251136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.476 ms 00:19:38.826 [2024-11-26 13:32:27.251142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.251235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.251242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.826 [2024-11-26 13:32:27.251249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:38.826 [2024-11-26 13:32:27.251254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.288304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.288335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.826 [2024-11-26 13:32:27.288346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.033 ms 00:19:38.826 [2024-11-26 13:32:27.288353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.288409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.288418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.826 [2024-11-26 13:32:27.288426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.826 [2024-11-26 13:32:27.288431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.288729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.288761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.826 [2024-11-26 13:32:27.288769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:38.826 [2024-11-26 13:32:27.288778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.288881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.288894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.826 [2024-11-26 13:32:27.288900] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:38.826 [2024-11-26 13:32:27.288907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.299537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.299563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.826 [2024-11-26 13:32:27.299571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.614 ms 00:19:38.826 [2024-11-26 13:32:27.299577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.309319] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:38.826 [2024-11-26 13:32:27.309348] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.826 [2024-11-26 13:32:27.309357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.309363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.826 [2024-11-26 13:32:27.309370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.713 ms 00:19:38.826 [2024-11-26 13:32:27.309375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.327493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.327522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.826 [2024-11-26 13:32:27.327531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.073 ms 00:19:38.826 [2024-11-26 13:32:27.327538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.336268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.336293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.826 [2024-11-26 13:32:27.336300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.680 ms 00:19:38.826 [2024-11-26 13:32:27.336306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.344958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.344982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.826 [2024-11-26 13:32:27.344990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.612 ms 00:19:38.826 [2024-11-26 13:32:27.344995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.826 [2024-11-26 13:32:27.345465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.826 [2024-11-26 13:32:27.345482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.827 [2024-11-26 13:32:27.345489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:19:38.827 [2024-11-26 13:32:27.345495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.827 [2024-11-26 13:32:27.388806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.827 [2024-11-26 13:32:27.388841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.827 [2024-11-26 13:32:27.388850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.295 ms 00:19:38.827 [2024-11-26 13:32:27.388856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.086 [2024-11-26 13:32:27.396561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:39.086 [2024-11-26 13:32:27.407845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.407874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.087 [2024-11-26 13:32:27.407884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.930 ms 00:19:39.087 [2024-11-26 13:32:27.407890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.407960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.407967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:39.087 [2024-11-26 13:32:27.407974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:39.087 [2024-11-26 13:32:27.407980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.408015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.408022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.087 [2024-11-26 13:32:27.408028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:39.087 [2024-11-26 13:32:27.408034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.408054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.408063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.087 [2024-11-26 13:32:27.408069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.087 [2024-11-26 13:32:27.408075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.408096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:39.087 [2024-11-26 13:32:27.408107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.408113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:39.087 [2024-11-26 13:32:27.408120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:39.087 [2024-11-26 13:32:27.408126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.426223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.426251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.087 [2024-11-26 13:32:27.426260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.082 ms 00:19:39.087 [2024-11-26 13:32:27.426266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 [2024-11-26 13:32:27.426334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.087 [2024-11-26 13:32:27.426343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.087 [2024-11-26 13:32:27.426349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:39.087 [2024-11-26 13:32:27.426355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.087 
[2024-11-26 13:32:27.427239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:39.087 [2024-11-26 13:32:27.429484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.551 ms, result 0
00:19:39.087 [2024-11-26 13:32:27.430137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:39.087 [2024-11-26 13:32:27.444859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:40.028  [2024-11-26T13:32:29.575Z] Copying: 33/256 [MB] (33 MBps)
[2024-11-26T13:32:30.511Z] Copying: 51/256 [MB] (18 MBps)
[2024-11-26T13:32:31.887Z] Copying: 76/256 [MB] (24 MBps)
[2024-11-26T13:32:32.455Z] Copying: 101/256 [MB] (24 MBps)
[2024-11-26T13:32:33.835Z] Copying: 124/256 [MB] (22 MBps)
[2024-11-26T13:32:34.781Z] Copying: 146/256 [MB] (21 MBps)
[2024-11-26T13:32:35.723Z] Copying: 159/256 [MB] (13 MBps)
[2024-11-26T13:32:36.668Z] Copying: 181/256 [MB] (22 MBps)
[2024-11-26T13:32:37.610Z] Copying: 200/256 [MB] (18 MBps)
[2024-11-26T13:32:38.553Z] Copying: 229/256 [MB] (28 MBps)
[2024-11-26T13:32:38.817Z] Copying: 251/256 [MB] (22 MBps)
[2024-11-26T13:32:38.817Z] Copying: 256/256 [MB] (average 22 MBps)
[2024-11-26 13:32:38.652500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:50.247 [2024-11-26 13:32:38.660633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.247 [2024-11-26 13:32:38.660676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:50.247 [2024-11-26 13:32:38.660688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:50.247 [2024-11-26 13:32:38.660697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.247 [2024-11-26 13:32:38.660723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:50.247 [2024-11-26 13:32:38.662999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.247 [2024-11-26 13:32:38.663032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:50.247 [2024-11-26 13:32:38.663042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.264 ms
00:19:50.247 [2024-11-26 13:32:38.663049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.247 [2024-11-26 13:32:38.664634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.247 [2024-11-26 13:32:38.664672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:50.247 [2024-11-26 13:32:38.664681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms
00:19:50.247 [2024-11-26 13:32:38.664688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.247 [2024-11-26 13:32:38.671099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.247 [2024-11-26 13:32:38.671138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:50.247 [2024-11-26 13:32:38.671152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.395 ms
00:19:50.247 [2024-11-26 13:32:38.671159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:50.247 [2024-11-26 13:32:38.676562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:50.247 [2024-11-26
13:32:38.676592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.247 [2024-11-26 13:32:38.676601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:19:50.247 [2024-11-26 13:32:38.676608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.247 [2024-11-26 13:32:38.694785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.247 [2024-11-26 13:32:38.694816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.247 [2024-11-26 13:32:38.694825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:19:50.247 [2024-11-26 13:32:38.694831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.247 [2024-11-26 13:32:38.707103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.247 [2024-11-26 13:32:38.707139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.247 [2024-11-26 13:32:38.707148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.241 ms 00:19:50.247 [2024-11-26 13:32:38.707156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.247 [2024-11-26 13:32:38.707251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.247 [2024-11-26 13:32:38.707258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.248 [2024-11-26 13:32:38.707265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:50.248 [2024-11-26 13:32:38.707277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.248 [2024-11-26 13:32:38.725248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.248 [2024-11-26 13:32:38.725277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.248 [2024-11-26 13:32:38.725285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.959 ms 00:19:50.248 [2024-11-26 13:32:38.725291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.248 [2024-11-26 13:32:38.742783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.248 [2024-11-26 13:32:38.742809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.248 [2024-11-26 13:32:38.742816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:19:50.248 [2024-11-26 13:32:38.742822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.248 [2024-11-26 13:32:38.759857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.248 [2024-11-26 13:32:38.759883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.248 [2024-11-26 13:32:38.759891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.008 ms 00:19:50.248 [2024-11-26 13:32:38.759896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.248 [2024-11-26 13:32:38.776714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.248 [2024-11-26 13:32:38.776739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.248 [2024-11-26 13:32:38.776746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.773 ms 00:19:50.248 [2024-11-26 13:32:38.776752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.248 [2024-11-26 13:32:38.776777] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.248 [2024-11-26 13:32:38.776787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776924] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.776994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 13:32:38.777061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.248 [2024-11-26 
13:32:38.777066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:19:50.249 [2024-11-26 13:32:38.777206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.249 [2024-11-26 13:32:38.777368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.249 [2024-11-26 13:32:38.777374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725 00:19:50.249 [2024-11-26 13:32:38.777380] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.249 [2024-11-26 13:32:38.777386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.249 [2024-11-26 13:32:38.777391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.249 [2024-11-26 13:32:38.777397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.249 [2024-11-26 13:32:38.777402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.249 [2024-11-26 13:32:38.777408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.249 [2024-11-26 13:32:38.777413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.249 [2024-11-26 13:32:38.777418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.249 [2024-11-26 13:32:38.777422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.249 [2024-11-26 13:32:38.777428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.249 [2024-11-26 13:32:38.777433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.249 [2024-11-26 13:32:38.777450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:19:50.249 [2024-11-26 13:32:38.777455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.249 [2024-11-26 13:32:38.786871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.249 [2024-11-26 13:32:38.786895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.249 [2024-11-26 13:32:38.786902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.403 ms 00:19:50.249 [2024-11-26 13:32:38.786908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.249 [2024-11-26 13:32:38.787184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.249 [2024-11-26 13:32:38.787200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.249 [2024-11-26 13:32:38.787207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:50.250 [2024-11-26 13:32:38.787213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.814342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.814370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.509 [2024-11-26 13:32:38.814377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.814383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.814462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.814470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.509 [2024-11-26 13:32:38.814476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.814482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.814512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.814519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.509 [2024-11-26 13:32:38.814525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.814531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.814545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.814552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.509 [2024-11-26 13:32:38.814558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.814563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.873911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.873941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.509 [2024-11-26 13:32:38.873948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.873954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.922643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.922679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.509 [2024-11-26 13:32:38.922686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.509 [2024-11-26 13:32:38.922692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.509 [2024-11-26 13:32:38.922741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.509 [2024-11-26 13:32:38.922748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.509 [2024-11-26 13:32:38.922755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.922760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.922789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.510 [2024-11-26 13:32:38.922796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.510 [2024-11-26 13:32:38.922803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.922809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.922875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.510 [2024-11-26 13:32:38.922882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.510 [2024-11-26 13:32:38.922888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.922894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.922918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.510 [2024-11-26 13:32:38.922924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.510 
[2024-11-26 13:32:38.922930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.922937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.922967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.510 [2024-11-26 13:32:38.922973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.510 [2024-11-26 13:32:38.922979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.922985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.923020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.510 [2024-11-26 13:32:38.923027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.510 [2024-11-26 13:32:38.923033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.510 [2024-11-26 13:32:38.923041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.510 [2024-11-26 13:32:38.923146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.508 ms, result 0 00:19:51.454 00:19:51.454 00:19:51.454 13:32:39 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76669 00:19:51.454 13:32:39 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76669 00:19:51.454 13:32:39 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76669 ']' 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.454 13:32:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:51.454 [2024-11-26 13:32:39.835926] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
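The xtrace lines above show trim.sh@71 launching spdk_tgt with -L ftl_init, trim.sh@72 capturing its PID in svcpid, and trim.sh@73 calling waitforlisten, which blocks until the target accepts RPCs on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming SPDK_DIR points at an SPDK checkout (the real waitforlisten helper in autotest_common.sh does considerably more bookkeeping):

  "$SPDK_DIR"/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # Poll until the target answers on the default RPC socket;
  # rpc_get_methods is a cheap RPC that succeeds once the server is listening.
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

The DPDK EAL output that follows is the target process itself coming up.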
00:19:51.454 [2024-11-26 13:32:39.836020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:19:51.454 [2024-11-26 13:32:39.984944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.712 [2024-11-26 13:32:40.067393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.284 13:32:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.284 13:32:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:52.284 13:32:40 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.547 [2024-11-26 13:32:40.881474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.547 [2024-11-26 13:32:40.881521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.547 [2024-11-26 13:32:41.054765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.054825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.547 [2024-11-26 13:32:41.054839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.547 [2024-11-26 13:32:41.054847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.057477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.057509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.547 [2024-11-26 13:32:41.057520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:19:52.547 [2024-11-26 13:32:41.057527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.057598] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.547 [2024-11-26 13:32:41.058311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.547 [2024-11-26 13:32:41.058336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.058344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.547 [2024-11-26 13:32:41.058353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:19:52.547 [2024-11-26 13:32:41.058361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.059513] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.547 [2024-11-26 13:32:41.072281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.072317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.547 [2024-11-26 13:32:41.072328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:19:52.547 [2024-11-26 13:32:41.072337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.072414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.072426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.547 [2024-11-26 13:32:41.072435] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:52.547 [2024-11-26 13:32:41.072455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.077241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.547 [2024-11-26 13:32:41.077275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.547 [2024-11-26 13:32:41.077283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.738 ms 00:19:52.547 [2024-11-26 13:32:41.077292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.547 [2024-11-26 13:32:41.077386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.077397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.548 [2024-11-26 13:32:41.077406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:52.548 [2024-11-26 13:32:41.077418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.077455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.077466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.548 [2024-11-26 13:32:41.077473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:52.548 [2024-11-26 13:32:41.077482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.077503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.548 [2024-11-26 13:32:41.080770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.080797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.548 [2024-11-26 13:32:41.080808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:19:52.548 [2024-11-26 13:32:41.080816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.080853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.080861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.548 [2024-11-26 13:32:41.080870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:52.548 [2024-11-26 13:32:41.080879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.080900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.548 [2024-11-26 13:32:41.080916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.548 [2024-11-26 13:32:41.080956] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.548 [2024-11-26 13:32:41.080970] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.548 [2024-11-26 13:32:41.081074] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.548 [2024-11-26 13:32:41.081090] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.548 [2024-11-26 13:32:41.081107] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.548 [2024-11-26 13:32:41.081116] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081126] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.548 [2024-11-26 13:32:41.081143] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.548 [2024-11-26 13:32:41.081150] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.548 [2024-11-26 13:32:41.081161] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.548 [2024-11-26 13:32:41.081168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.081177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.548 [2024-11-26 13:32:41.081184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:19:52.548 [2024-11-26 13:32:41.081193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.081280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.548 [2024-11-26 13:32:41.081289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.548 [2024-11-26 13:32:41.081296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:52.548 [2024-11-26 13:32:41.081305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.548 [2024-11-26 13:32:41.081413] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.548 [2024-11-26 13:32:41.081430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.548 [2024-11-26 13:32:41.081438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.548 [2024-11-26 13:32:41.081481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.548 [2024-11-26 13:32:41.081507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.548 [2024-11-26 13:32:41.081521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.548 [2024-11-26 13:32:41.081530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.548 [2024-11-26 13:32:41.081537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.548 [2024-11-26 13:32:41.081545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.548 [2024-11-26 13:32:41.081552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.548 [2024-11-26 13:32:41.081560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 
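The layout numbers above are internally consistent and worth one cross-check. The l2p region holds the logical-to-physical table: 23592960 L2P entries at an address size of 4 bytes is exactly 90 MiB, matching the "l2p ... blocks: 90.00 MiB" region in the NV cache layout. In shell:

  echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90 (MiB), the size of the l2p region

The same region shows up in the superblock metadata dump further below as type 0x2 with blk_sz 0x5a00: 0x5a00 is 23040 blocks, and at the 4 KiB block size implied by the sb region (0x20 blocks for 0.12 MiB) that is again 23040 * 4 KiB = 90 MiB. The region listing continues below.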
[2024-11-26 13:32:41.081567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.548 [2024-11-26 13:32:41.081576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.548 [2024-11-26 13:32:41.081602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.548 [2024-11-26 13:32:41.081626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.548 [2024-11-26 13:32:41.081647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.548 [2024-11-26 13:32:41.081669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.548 [2024-11-26 13:32:41.081691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.548 [2024-11-26 13:32:41.081706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.548 [2024-11-26 13:32:41.081714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.548 [2024-11-26 13:32:41.081720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.548 [2024-11-26 13:32:41.081728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.548 [2024-11-26 13:32:41.081734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.548 [2024-11-26 13:32:41.081744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.548 [2024-11-26 13:32:41.081759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.548 [2024-11-26 13:32:41.081765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081773] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.548 [2024-11-26 13:32:41.081783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.548 [2024-11-26 13:32:41.081792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.548 [2024-11-26 13:32:41.081808] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:52.548 [2024-11-26 13:32:41.081816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.548 [2024-11-26 13:32:41.081824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.548 [2024-11-26 13:32:41.081831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.548 [2024-11-26 13:32:41.081839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.548 [2024-11-26 13:32:41.081845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.548 [2024-11-26 13:32:41.081855] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.548 [2024-11-26 13:32:41.081864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.548 [2024-11-26 13:32:41.081875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.548 [2024-11-26 13:32:41.081882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.548 [2024-11-26 13:32:41.081892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.548 [2024-11-26 13:32:41.081899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.548 [2024-11-26 13:32:41.081907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.548 [2024-11-26 13:32:41.081914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.548 [2024-11-26 13:32:41.081922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.548 [2024-11-26 13:32:41.081929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.548 [2024-11-26 13:32:41.081938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.549 [2024-11-26 13:32:41.081945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.081953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.081960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.081968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.081975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.549 [2024-11-26 13:32:41.081984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.549 [2024-11-26 
13:32:41.081992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.082002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.549 [2024-11-26 13:32:41.082009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.549 [2024-11-26 13:32:41.082018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.549 [2024-11-26 13:32:41.082026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.549 [2024-11-26 13:32:41.082035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.549 [2024-11-26 13:32:41.082042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.549 [2024-11-26 13:32:41.082051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:52.549 [2024-11-26 13:32:41.082060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.549 [2024-11-26 13:32:41.107720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.549 [2024-11-26 13:32:41.107755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.549 [2024-11-26 13:32:41.107768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:19:52.549 [2024-11-26 13:32:41.107778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.549 [2024-11-26 13:32:41.107892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.549 [2024-11-26 13:32:41.107902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.549 [2024-11-26 13:32:41.107912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:52.549 [2024-11-26 13:32:41.107919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.138110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.138143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.812 [2024-11-26 13:32:41.138155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.168 ms 00:19:52.812 [2024-11-26 13:32:41.138162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.138215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.138224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.812 [2024-11-26 13:32:41.138234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.812 [2024-11-26 13:32:41.138241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.138583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.138605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.812 [2024-11-26 13:32:41.138618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:19:52.812 [2024-11-26 13:32:41.138625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.138746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.138759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.812 [2024-11-26 13:32:41.138769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:52.812 [2024-11-26 13:32:41.138776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.153080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.153110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.812 [2024-11-26 13:32:41.153121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.265 ms 00:19:52.812 [2024-11-26 13:32:41.153128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.165858] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:52.812 [2024-11-26 13:32:41.165889] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.812 [2024-11-26 13:32:41.165901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.165909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.812 [2024-11-26 13:32:41.165919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.664 ms 00:19:52.812 [2024-11-26 13:32:41.165932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.190211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.190245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.812 [2024-11-26 13:32:41.190257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.211 ms 00:19:52.812 [2024-11-26 13:32:41.190264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.202000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.202028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.812 [2024-11-26 13:32:41.202041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.667 ms 00:19:52.812 [2024-11-26 13:32:41.202048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.213766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.213795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.812 [2024-11-26 13:32:41.213806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.654 ms 00:19:52.812 [2024-11-26 13:32:41.213813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.214410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.214432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.812 [2024-11-26 13:32:41.214459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:19:52.812 [2024-11-26 13:32:41.214467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 
13:32:41.279898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.279949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.812 [2024-11-26 13:32:41.279965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.406 ms 00:19:52.812 [2024-11-26 13:32:41.279973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.290207] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.812 [2024-11-26 13:32:41.303940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.303980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.812 [2024-11-26 13:32:41.303993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.881 ms 00:19:52.812 [2024-11-26 13:32:41.304002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.304069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.304081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.812 [2024-11-26 13:32:41.304089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:52.812 [2024-11-26 13:32:41.304099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.304146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.304156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.812 [2024-11-26 13:32:41.304164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:52.812 [2024-11-26 13:32:41.304175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.304198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.304207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.812 [2024-11-26 13:32:41.304215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.812 [2024-11-26 13:32:41.304224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.304254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.812 [2024-11-26 13:32:41.304266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.304275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.812 [2024-11-26 13:32:41.304284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.812 [2024-11-26 13:32:41.304291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.329044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.329084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.812 [2024-11-26 13:32:41.329099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.727 ms 00:19:52.812 [2024-11-26 13:32:41.329107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.329200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.812 [2024-11-26 13:32:41.329211] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.812 [2024-11-26 13:32:41.329223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:52.812 [2024-11-26 13:32:41.329230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.812 [2024-11-26 13:32:41.330366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.812 [2024-11-26 13:32:41.333399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.328 ms, result 0 00:19:52.812 [2024-11-26 13:32:41.334979] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.812 Some configs were skipped because the RPC state that can call them passed over. 00:19:52.812 13:32:41 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:53.074 [2024-11-26 13:32:41.561794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.074 [2024-11-26 13:32:41.561844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.074 [2024-11-26 13:32:41.561855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:19:53.074 [2024-11-26 13:32:41.561865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.074 [2024-11-26 13:32:41.561896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.990 ms, result 0 00:19:53.074 true 00:19:53.074 13:32:41 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:53.335 [2024-11-26 13:32:41.766747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.335 [2024-11-26 13:32:41.766794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:53.335 [2024-11-26 13:32:41.766807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:19:53.335 [2024-11-26 13:32:41.766815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.335 [2024-11-26 13:32:41.766849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.713 ms, result 0 00:19:53.335 true 00:19:53.335 13:32:41 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76669 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76669 ']' 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76669 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76669 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.335 killing process with pid 76669 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76669' 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76669 00:19:53.335 13:32:41 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76669 00:19:54.281 [2024-11-26 13:32:42.487720] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.487768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:54.281 [2024-11-26 13:32:42.487779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.281 [2024-11-26 13:32:42.487787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.487805] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:54.281 [2024-11-26 13:32:42.489905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.489930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:54.281 [2024-11-26 13:32:42.489941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:19:54.281 [2024-11-26 13:32:42.489948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.490166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.490186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:54.281 [2024-11-26 13:32:42.490194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:54.281 [2024-11-26 13:32:42.490200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.493463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.493490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:54.281 [2024-11-26 13:32:42.493501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:19:54.281 [2024-11-26 13:32:42.493508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.498738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.498763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:54.281 [2024-11-26 13:32:42.498772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.201 ms 00:19:54.281 [2024-11-26 13:32:42.498778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.506549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.506579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:54.281 [2024-11-26 13:32:42.506589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.720 ms 00:19:54.281 [2024-11-26 13:32:42.506595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.512983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.513013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:54.281 [2024-11-26 13:32:42.513021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.357 ms 00:19:54.281 [2024-11-26 13:32:42.513028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.513128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.513135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:54.281 [2024-11-26 13:32:42.513144] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:54.281 [2024-11-26 13:32:42.513149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.520989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.521014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:54.281 [2024-11-26 13:32:42.521022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.824 ms 00:19:54.281 [2024-11-26 13:32:42.521028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.528469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.528495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:54.281 [2024-11-26 13:32:42.528505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.414 ms 00:19:54.281 [2024-11-26 13:32:42.528510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.535415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.535447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.281 [2024-11-26 13:32:42.535455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.875 ms 00:19:54.281 [2024-11-26 13:32:42.535461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.542376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.281 [2024-11-26 13:32:42.542402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.281 [2024-11-26 13:32:42.542410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:19:54.281 [2024-11-26 13:32:42.542415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.281 [2024-11-26 13:32:42.542450] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.281 [2024-11-26 13:32:42.542461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.281 [2024-11-26 13:32:42.542470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.281 [2024-11-26 13:32:42.542476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.281 [2024-11-26 13:32:42.542483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.281 [2024-11-26 13:32:42.542489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542528] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 
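One aside before the dump continues: the shutdown being traced here follows the two bdev_ftl_unmap calls earlier in the run, and those trims covered the first and the last 1024-block ranges of the logical space. The second call's start LBA is simply the 23592960 L2P entries reported at startup minus the 1024-block unmap length:

  echo $(( 23592960 - 1024 ))   # prints 23591936, the --lba passed to the second unmap

With only trims and no user data written, every band below still reports wr_cnt 0 and state free.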
[2024-11-26 13:32:42.542689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:54.282 [2024-11-26 13:32:42.542859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.542999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.282 [2024-11-26 13:32:42.543075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.283 [2024-11-26 13:32:42.543123] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.283 [2024-11-26 13:32:42.543133] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725 00:19:54.283 [2024-11-26 13:32:42.543140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.283 [2024-11-26 13:32:42.543147] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.283 [2024-11-26 13:32:42.543152] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.283 [2024-11-26 13:32:42.543160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.283 [2024-11-26 13:32:42.543165] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.283 [2024-11-26 13:32:42.543172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.283 [2024-11-26 13:32:42.543178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.283 [2024-11-26 13:32:42.543183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.283 [2024-11-26 13:32:42.543188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.283 [2024-11-26 13:32:42.543195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:54.283 [2024-11-26 13:32:42.543200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.283 [2024-11-26 13:32:42.543207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:19:54.283 [2024-11-26 13:32:42.543213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.552524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.283 [2024-11-26 13:32:42.552548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.283 [2024-11-26 13:32:42.552558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.289 ms 00:19:54.283 [2024-11-26 13:32:42.552564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.552846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.283 [2024-11-26 13:32:42.552863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.283 [2024-11-26 13:32:42.552873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:54.283 [2024-11-26 13:32:42.552878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.587545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.587573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.283 [2024-11-26 13:32:42.587582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.587589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.587660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.587667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.283 [2024-11-26 13:32:42.587676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.587682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.587717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.587724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.283 [2024-11-26 13:32:42.587733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.587738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.587752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.587758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.283 [2024-11-26 13:32:42.587765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.587771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.646196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.646229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.283 [2024-11-26 13:32:42.646238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.646245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 
13:32:42.693416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.283 [2024-11-26 13:32:42.693466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.283 [2024-11-26 13:32:42.693548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.283 [2024-11-26 13:32:42.693591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.283 [2024-11-26 13:32:42.693683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.283 [2024-11-26 13:32:42.693727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.283 [2024-11-26 13:32:42.693780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.283 [2024-11-26 13:32:42.693827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.283 [2024-11-26 13:32:42.693834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.283 [2024-11-26 13:32:42.693839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.283 [2024-11-26 13:32:42.693943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 206.204 ms, result 0 00:19:54.859 13:32:43 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:54.859 13:32:43 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.859 [2024-11-26 13:32:43.269816] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:19:54.859 [2024-11-26 13:32:43.270092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76716 ] 00:19:55.121 [2024-11-26 13:32:43.426168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.121 [2024-11-26 13:32:43.500274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.383 [2024-11-26 13:32:43.704108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.383 [2024-11-26 13:32:43.704156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.383 [2024-11-26 13:32:43.855872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.855908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.383 [2024-11-26 13:32:43.855919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.383 [2024-11-26 13:32:43.855925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.383 [2024-11-26 13:32:43.857935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.857964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.383 [2024-11-26 13:32:43.857972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:19:55.383 [2024-11-26 13:32:43.857978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.383 [2024-11-26 13:32:43.858031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.383 [2024-11-26 13:32:43.858580] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.383 [2024-11-26 13:32:43.858601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.858607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.383 [2024-11-26 13:32:43.858614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:19:55.383 [2024-11-26 13:32:43.858619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.383 [2024-11-26 13:32:43.859581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.383 [2024-11-26 13:32:43.868896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.868926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.383 [2024-11-26 13:32:43.868934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.315 ms 00:19:55.383 [2024-11-26 13:32:43.868940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.383 [2024-11-26 13:32:43.869008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.869017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.383 [2024-11-26 13:32:43.869023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:19:55.383 [2024-11-26 13:32:43.869029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.383 [2024-11-26 13:32:43.873271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.383 [2024-11-26 13:32:43.873295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.383 [2024-11-26 13:32:43.873302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:19:55.384 [2024-11-26 13:32:43.873308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.873377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.873385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.384 [2024-11-26 13:32:43.873391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:55.384 [2024-11-26 13:32:43.873397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.873416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.873422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.384 [2024-11-26 13:32:43.873428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.384 [2024-11-26 13:32:43.873433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.873466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.384 [2024-11-26 13:32:43.876135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.876160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.384 [2024-11-26 13:32:43.876167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.670 ms 00:19:55.384 [2024-11-26 13:32:43.876173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.876199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.876206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.384 [2024-11-26 13:32:43.876212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:55.384 [2024-11-26 13:32:43.876218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.876236] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.384 [2024-11-26 13:32:43.876249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.384 [2024-11-26 13:32:43.876281] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.384 [2024-11-26 13:32:43.876296] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.384 [2024-11-26 13:32:43.876374] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.384 [2024-11-26 13:32:43.876382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.384 [2024-11-26 13:32:43.876389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.384 [2024-11-26 13:32:43.876400] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876406] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.384 [2024-11-26 13:32:43.876418] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.384 [2024-11-26 13:32:43.876424] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.384 [2024-11-26 13:32:43.876430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.384 [2024-11-26 13:32:43.876435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.876451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.384 [2024-11-26 13:32:43.876457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:19:55.384 [2024-11-26 13:32:43.876462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.876533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.384 [2024-11-26 13:32:43.876542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.384 [2024-11-26 13:32:43.876548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:55.384 [2024-11-26 13:32:43.876554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.384 [2024-11-26 13:32:43.876630] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.384 [2024-11-26 13:32:43.876637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.384 [2024-11-26 13:32:43.876643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.384 [2024-11-26 13:32:43.876660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.384 [2024-11-26 13:32:43.876676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.384 [2024-11-26 13:32:43.876686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.384 [2024-11-26 13:32:43.876695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.384 [2024-11-26 13:32:43.876700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.384 [2024-11-26 13:32:43.876704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.384 [2024-11-26 13:32:43.876711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.384 [2024-11-26 13:32:43.876716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876722] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.384 [2024-11-26 13:32:43.876727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.384 [2024-11-26 13:32:43.876742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.384 [2024-11-26 13:32:43.876757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.384 [2024-11-26 13:32:43.876772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.384 [2024-11-26 13:32:43.876786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.384 [2024-11-26 13:32:43.876801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.384 [2024-11-26 13:32:43.876811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.384 [2024-11-26 13:32:43.876816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.384 [2024-11-26 13:32:43.876820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.384 [2024-11-26 13:32:43.876825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.384 [2024-11-26 13:32:43.876830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.384 [2024-11-26 13:32:43.876835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.384 [2024-11-26 13:32:43.876845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.384 [2024-11-26 13:32:43.876850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876855] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.384 [2024-11-26 13:32:43.876861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.384 [2024-11-26 13:32:43.876868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.384 [2024-11-26 13:32:43.876879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.384 
[2024-11-26 13:32:43.876885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.384 [2024-11-26 13:32:43.876890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.384 [2024-11-26 13:32:43.876895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.384 [2024-11-26 13:32:43.876899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.384 [2024-11-26 13:32:43.876904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.384 [2024-11-26 13:32:43.876911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.384 [2024-11-26 13:32:43.876918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.384 [2024-11-26 13:32:43.876924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.384 [2024-11-26 13:32:43.876930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.384 [2024-11-26 13:32:43.876935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.384 [2024-11-26 13:32:43.876940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.384 [2024-11-26 13:32:43.876945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.384 [2024-11-26 13:32:43.876950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.384 [2024-11-26 13:32:43.876956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.384 [2024-11-26 13:32:43.876961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.385 [2024-11-26 13:32:43.876966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.385 [2024-11-26 13:32:43.876971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.876976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.876983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.876988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.876994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.385 [2024-11-26 13:32:43.876999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.385 [2024-11-26 13:32:43.877005] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.877011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.385 [2024-11-26 13:32:43.877017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.385 [2024-11-26 13:32:43.877022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.385 [2024-11-26 13:32:43.877028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.385 [2024-11-26 13:32:43.877034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.877042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.385 [2024-11-26 13:32:43.877047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:19:55.385 [2024-11-26 13:32:43.877053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.897664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.897691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.385 [2024-11-26 13:32:43.897698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.572 ms 00:19:55.385 [2024-11-26 13:32:43.897704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.897795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.897802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.385 [2024-11-26 13:32:43.897809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:55.385 [2024-11-26 13:32:43.897814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.936005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.936044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.385 [2024-11-26 13:32:43.936055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.174 ms 00:19:55.385 [2024-11-26 13:32:43.936061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.936117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.936126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.385 [2024-11-26 13:32:43.936132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:55.385 [2024-11-26 13:32:43.936138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.936411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.936428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.385 [2024-11-26 13:32:43.936436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:19:55.385 [2024-11-26 13:32:43.936455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 
13:32:43.936557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.936571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.385 [2024-11-26 13:32:43.936577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:55.385 [2024-11-26 13:32:43.936583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.385 [2024-11-26 13:32:43.947271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.385 [2024-11-26 13:32:43.947298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.385 [2024-11-26 13:32:43.947305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.672 ms 00:19:55.385 [2024-11-26 13:32:43.947311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:43.956958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.648 [2024-11-26 13:32:43.956987] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.648 [2024-11-26 13:32:43.956996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:43.957002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.648 [2024-11-26 13:32:43.957009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.602 ms 00:19:55.648 [2024-11-26 13:32:43.957014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:43.975219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:43.975248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.648 [2024-11-26 13:32:43.975257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:19:55.648 [2024-11-26 13:32:43.975263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:43.983920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:43.983945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.648 [2024-11-26 13:32:43.983952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.604 ms 00:19:55.648 [2024-11-26 13:32:43.983958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:43.992662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:43.992688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.648 [2024-11-26 13:32:43.992695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:19:55.648 [2024-11-26 13:32:43.992701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:43.993160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:43.993180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.648 [2024-11-26 13:32:43.993187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:55.648 [2024-11-26 13:32:43.993193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.036018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.036054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.648 [2024-11-26 13:32:44.036064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.808 ms 00:19:55.648 [2024-11-26 13:32:44.036071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.043682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.648 [2024-11-26 13:32:44.054994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.055023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.648 [2024-11-26 13:32:44.055032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.857 ms 00:19:55.648 [2024-11-26 13:32:44.055042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.055110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.055118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.648 [2024-11-26 13:32:44.055125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:55.648 [2024-11-26 13:32:44.055131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.055166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.055173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.648 [2024-11-26 13:32:44.055179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:55.648 [2024-11-26 13:32:44.055187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.055208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.055214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.648 [2024-11-26 13:32:44.055220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.648 [2024-11-26 13:32:44.055225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.055248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.648 [2024-11-26 13:32:44.055254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.055260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.648 [2024-11-26 13:32:44.055266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:55.648 [2024-11-26 13:32:44.055272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.072861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.072888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.648 [2024-11-26 13:32:44.072897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.575 ms 00:19:55.648 [2024-11-26 13:32:44.072903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.072973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.648 [2024-11-26 13:32:44.072981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:55.648 [2024-11-26 13:32:44.072988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:55.648 [2024-11-26 13:32:44.072994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.648 [2024-11-26 13:32:44.073695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.648 [2024-11-26 13:32:44.076057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.582 ms, result 0 00:19:55.648 [2024-11-26 13:32:44.076688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.648 [2024-11-26 13:32:44.091297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.594  [2024-11-26T13:32:46.110Z] Copying: 28/256 [MB] (28 MBps) [2024-11-26T13:32:47.503Z] Copying: 50/256 [MB] (21 MBps) [2024-11-26T13:32:48.450Z] Copying: 69/256 [MB] (19 MBps) [2024-11-26T13:32:49.395Z] Copying: 90/256 [MB] (20 MBps) [2024-11-26T13:32:50.338Z] Copying: 112/256 [MB] (21 MBps) [2024-11-26T13:32:51.281Z] Copying: 128/256 [MB] (16 MBps) [2024-11-26T13:32:52.224Z] Copying: 146/256 [MB] (17 MBps) [2024-11-26T13:32:53.160Z] Copying: 164/256 [MB] (18 MBps) [2024-11-26T13:32:54.537Z] Copying: 188/256 [MB] (24 MBps) [2024-11-26T13:32:55.219Z] Copying: 213/256 [MB] (25 MBps) [2024-11-26T13:32:56.185Z] Copying: 229/256 [MB] (15 MBps) [2024-11-26T13:32:57.126Z] Copying: 246/256 [MB] (17 MBps) [2024-11-26T13:32:57.126Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-26 13:32:56.903075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.556 [2024-11-26 13:32:56.912729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.912774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.556 [2024-11-26 13:32:56.912789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:08.556 [2024-11-26 13:32:56.912804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.912827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:08.556 [2024-11-26 13:32:56.915703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.915740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.556 [2024-11-26 13:32:56.915750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:20:08.556 [2024-11-26 13:32:56.915758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.916017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.916027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.556 [2024-11-26 13:32:56.916036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:08.556 [2024-11-26 13:32:56.916043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.919731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.919761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.556 [2024-11-26 13:32:56.919771] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:20:08.556 [2024-11-26 13:32:56.919779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.926664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.926697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:08.556 [2024-11-26 13:32:56.926707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:20:08.556 [2024-11-26 13:32:56.926715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.951070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.951121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.556 [2024-11-26 13:32:56.951133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.297 ms 00:20:08.556 [2024-11-26 13:32:56.951141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.967711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.967769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.556 [2024-11-26 13:32:56.967789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.521 ms 00:20:08.556 [2024-11-26 13:32:56.967797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.967956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.967968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.556 [2024-11-26 13:32:56.967989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:08.556 [2024-11-26 13:32:56.967997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:56.993746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:56.993799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:08.556 [2024-11-26 13:32:56.993811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.732 ms 00:20:08.556 [2024-11-26 13:32:56.993817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:57.019711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:57.019763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:08.556 [2024-11-26 13:32:57.019774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.830 ms 00:20:08.556 [2024-11-26 13:32:57.019780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:57.044379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:57.044435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.556 [2024-11-26 13:32:57.044457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.535 ms 00:20:08.556 [2024-11-26 13:32:57.044464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:57.069486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.556 [2024-11-26 13:32:57.069536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:08.556 [2024-11-26 13:32:57.069548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.938 ms 00:20:08.556 [2024-11-26 13:32:57.069555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.556 [2024-11-26 13:32:57.069606] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.556 [2024-11-26 13:32:57.069622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.556 [2024-11-26 13:32:57.069769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 
13:32:57.069790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:08.557 [2024-11-26 13:32:57.069978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.069993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.557 [2024-11-26 13:32:57.070419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.557 [2024-11-26 13:32:57.070428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725 00:20:08.557 [2024-11-26 13:32:57.070436] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.557 [2024-11-26 13:32:57.070459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:08.557 [2024-11-26 13:32:57.070467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.557 [2024-11-26 13:32:57.070475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.557 [2024-11-26 13:32:57.070483] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.557 [2024-11-26 13:32:57.070491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.557 [2024-11-26 13:32:57.070498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.557 [2024-11-26 13:32:57.070506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.557 [2024-11-26 13:32:57.070513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.557 [2024-11-26 13:32:57.070520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.558 [2024-11-26 13:32:57.070531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.558 [2024-11-26 13:32:57.070540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:20:08.558 [2024-11-26 13:32:57.070547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.558 [2024-11-26 13:32:57.084302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.558 [2024-11-26 13:32:57.084350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.558 [2024-11-26 13:32:57.084363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.720 ms 00:20:08.558 [2024-11-26 13:32:57.084370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.558 [2024-11-26 13:32:57.084808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.558 [2024-11-26 13:32:57.084828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.558 [2024-11-26 13:32:57.084837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:20:08.558 [2024-11-26 13:32:57.084844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.123846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.123902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.819 [2024-11-26 13:32:57.123914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.123923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 
[2024-11-26 13:32:57.124025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.124036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.819 [2024-11-26 13:32:57.124045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.124053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.124108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.124118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.819 [2024-11-26 13:32:57.124127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.124136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.124156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.124165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.819 [2024-11-26 13:32:57.124173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.124181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.208037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.208104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.819 [2024-11-26 13:32:57.208118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.208126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.278147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.278213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.819 [2024-11-26 13:32:57.278227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.278236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.819 [2024-11-26 13:32:57.278297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.819 [2024-11-26 13:32:57.278307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.819 [2024-11-26 13:32:57.278317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.819 [2024-11-26 13:32:57.278326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.820 [2024-11-26 13:32:57.278369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.820 [2024-11-26 13:32:57.278385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.820 [2024-11-26 13:32:57.278394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.820 [2024-11-26 13:32:57.278533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.820 [2024-11-26 13:32:57.278541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.820 [2024-11-26 13:32:57.278550] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.820 [2024-11-26 13:32:57.278598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.820 [2024-11-26 13:32:57.278610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.820 [2024-11-26 13:32:57.278618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.820 [2024-11-26 13:32:57.278674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.820 [2024-11-26 13:32:57.278682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.820 [2024-11-26 13:32:57.278690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.820 [2024-11-26 13:32:57.278750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.820 [2024-11-26 13:32:57.278761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.820 [2024-11-26 13:32:57.278769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.820 [2024-11-26 13:32:57.278945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.197 ms, result 0 00:20:09.759 00:20:09.759 00:20:09.759 13:32:57 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:09.759 13:32:57 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:10.020 13:32:58 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:10.282 [2024-11-26 13:32:58.611846] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:20:10.282 [2024-11-26 13:32:58.612002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:20:10.282 [2024-11-26 13:32:58.772283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.541 [2024-11-26 13:32:58.903720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.800 [2024-11-26 13:32:59.201676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.800 [2024-11-26 13:32:59.201766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.800 [2024-11-26 13:32:59.360953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.800 [2024-11-26 13:32:59.361007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.800 [2024-11-26 13:32:59.361021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.800 [2024-11-26 13:32:59.361030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.800 [2024-11-26 13:32:59.363747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.800 [2024-11-26 13:32:59.363786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.800 [2024-11-26 13:32:59.363796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:20:10.800 [2024-11-26 13:32:59.363804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.800 [2024-11-26 13:32:59.363881] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.800 [2024-11-26 13:32:59.364592] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.800 [2024-11-26 13:32:59.364618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.800 [2024-11-26 13:32:59.364626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.800 [2024-11-26 13:32:59.364635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:20:10.800 [2024-11-26 13:32:59.364642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.800 [2024-11-26 13:32:59.365852] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:11.062 [2024-11-26 13:32:59.378877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.378916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:11.062 [2024-11-26 13:32:59.378929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.026 ms 00:20:11.062 [2024-11-26 13:32:59.378937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.379030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.379041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:11.062 [2024-11-26 13:32:59.379050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:11.062 [2024-11-26 13:32:59.379057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.384259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:11.062 [2024-11-26 13:32:59.384291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.062 [2024-11-26 13:32:59.384302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:20:11.062 [2024-11-26 13:32:59.384310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.384394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.384403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.062 [2024-11-26 13:32:59.384411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:11.062 [2024-11-26 13:32:59.384419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.384460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.384469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.062 [2024-11-26 13:32:59.384476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:11.062 [2024-11-26 13:32:59.384484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.384504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:11.062 [2024-11-26 13:32:59.387801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.387829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.062 [2024-11-26 13:32:59.387839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.302 ms 00:20:11.062 [2024-11-26 13:32:59.387848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.387884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.387893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.062 [2024-11-26 13:32:59.387902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:11.062 [2024-11-26 13:32:59.387910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.387930] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:11.062 [2024-11-26 13:32:59.387950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:11.062 [2024-11-26 13:32:59.387987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:11.062 [2024-11-26 13:32:59.388003] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:11.062 [2024-11-26 13:32:59.388107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.062 [2024-11-26 13:32:59.388119] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.062 [2024-11-26 13:32:59.388130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.062 [2024-11-26 13:32:59.388144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.062 [2024-11-26 13:32:59.388154] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.062 [2024-11-26 13:32:59.388163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:11.062 [2024-11-26 13:32:59.388171] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.062 [2024-11-26 13:32:59.388180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.062 [2024-11-26 13:32:59.388188] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.062 [2024-11-26 13:32:59.388196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.388205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.062 [2024-11-26 13:32:59.388213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:11.062 [2024-11-26 13:32:59.388221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.388308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.062 [2024-11-26 13:32:59.388320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.062 [2024-11-26 13:32:59.388328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:11.062 [2024-11-26 13:32:59.388336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.062 [2024-11-26 13:32:59.388462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.062 [2024-11-26 13:32:59.388481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.062 [2024-11-26 13:32:59.388490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.062 [2024-11-26 13:32:59.388499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.062 [2024-11-26 13:32:59.388508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.062 [2024-11-26 13:32:59.388516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.062 [2024-11-26 13:32:59.388523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:11.062 [2024-11-26 13:32:59.388531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.062 [2024-11-26 13:32:59.388539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:11.062 [2024-11-26 13:32:59.388546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.062 [2024-11-26 13:32:59.388555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.062 [2024-11-26 13:32:59.388568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:11.063 [2024-11-26 13:32:59.388576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.063 [2024-11-26 13:32:59.388585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.063 [2024-11-26 13:32:59.388593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:11.063 [2024-11-26 13:32:59.388601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.063 [2024-11-26 13:32:59.388616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388623] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.063 [2024-11-26 13:32:59.388639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.063 [2024-11-26 13:32:59.388661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.063 [2024-11-26 13:32:59.388684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.063 [2024-11-26 13:32:59.388706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.063 [2024-11-26 13:32:59.388728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.063 [2024-11-26 13:32:59.388742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.063 [2024-11-26 13:32:59.388748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:11.063 [2024-11-26 13:32:59.388754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.063 [2024-11-26 13:32:59.388760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.063 [2024-11-26 13:32:59.388767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:11.063 [2024-11-26 13:32:59.388773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.063 [2024-11-26 13:32:59.388785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:11.063 [2024-11-26 13:32:59.388792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.063 [2024-11-26 13:32:59.388806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.063 [2024-11-26 13:32:59.388815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.063 [2024-11-26 13:32:59.388830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.063 [2024-11-26 13:32:59.388836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.063 [2024-11-26 13:32:59.388843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.063 
[2024-11-26 13:32:59.388849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.063 [2024-11-26 13:32:59.388856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.063 [2024-11-26 13:32:59.388862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.063 [2024-11-26 13:32:59.388870] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.063 [2024-11-26 13:32:59.388879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.388887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:11.063 [2024-11-26 13:32:59.388895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:11.063 [2024-11-26 13:32:59.388902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:11.063 [2024-11-26 13:32:59.388910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:11.063 [2024-11-26 13:32:59.388917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:11.063 [2024-11-26 13:32:59.388923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:11.063 [2024-11-26 13:32:59.388931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:11.063 [2024-11-26 13:32:59.388937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:11.063 [2024-11-26 13:32:59.388944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:11.063 [2024-11-26 13:32:59.388951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.388959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.388966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.388973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.388980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:11.063 [2024-11-26 13:32:59.388987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.063 [2024-11-26 13:32:59.388995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.389002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.063 [2024-11-26 13:32:59.389009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.063 [2024-11-26 13:32:59.389016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.063 [2024-11-26 13:32:59.389023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.063 [2024-11-26 13:32:59.389030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.389040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.063 [2024-11-26 13:32:59.389048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:20:11.063 [2024-11-26 13:32:59.389054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.415398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.415434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.063 [2024-11-26 13:32:59.415457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.280 ms 00:20:11.063 [2024-11-26 13:32:59.415465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.415583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.415593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:11.063 [2024-11-26 13:32:59.415602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:11.063 [2024-11-26 13:32:59.415609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.454002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.454046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.063 [2024-11-26 13:32:59.454060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.370 ms 00:20:11.063 [2024-11-26 13:32:59.454068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.454158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.454170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.063 [2024-11-26 13:32:59.454178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:11.063 [2024-11-26 13:32:59.454185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.454550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.454573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.063 [2024-11-26 13:32:59.454583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:20:11.063 [2024-11-26 13:32:59.454597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.454731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.454740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.063 [2024-11-26 13:32:59.454749] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:11.063 [2024-11-26 13:32:59.454756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.468357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.468392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.063 [2024-11-26 13:32:59.468402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.581 ms 00:20:11.063 [2024-11-26 13:32:59.468409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.063 [2024-11-26 13:32:59.481333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:11.063 [2024-11-26 13:32:59.481372] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:11.063 [2024-11-26 13:32:59.481384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.063 [2024-11-26 13:32:59.481392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:11.063 [2024-11-26 13:32:59.481401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.867 ms 00:20:11.063 [2024-11-26 13:32:59.481408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.505940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.505977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:11.064 [2024-11-26 13:32:59.505987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.451 ms 00:20:11.064 [2024-11-26 13:32:59.505994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.517671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.517706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:11.064 [2024-11-26 13:32:59.517716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.606 ms 00:20:11.064 [2024-11-26 13:32:59.517723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.529353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.529387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:11.064 [2024-11-26 13:32:59.529397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.566 ms 00:20:11.064 [2024-11-26 13:32:59.529404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.530022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.530047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:11.064 [2024-11-26 13:32:59.530056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:20:11.064 [2024-11-26 13:32:59.530064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.586360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.586421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:11.064 [2024-11-26 13:32:59.586436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.271 ms 00:20:11.064 [2024-11-26 13:32:59.586460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.597158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:11.064 [2024-11-26 13:32:59.612646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.612697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:11.064 [2024-11-26 13:32:59.612713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.059 ms 00:20:11.064 [2024-11-26 13:32:59.612721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.612815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.612825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:11.064 [2024-11-26 13:32:59.612833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:11.064 [2024-11-26 13:32:59.612841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.612892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.612901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:11.064 [2024-11-26 13:32:59.612912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:11.064 [2024-11-26 13:32:59.612922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.612944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.612952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:11.064 [2024-11-26 13:32:59.612960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:11.064 [2024-11-26 13:32:59.612967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.064 [2024-11-26 13:32:59.612999] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:11.064 [2024-11-26 13:32:59.613009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.064 [2024-11-26 13:32:59.613017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:11.064 [2024-11-26 13:32:59.613024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.064 [2024-11-26 13:32:59.613032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.324 [2024-11-26 13:32:59.637858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.324 [2024-11-26 13:32:59.637912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.324 [2024-11-26 13:32:59.637925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.804 ms 00:20:11.324 [2024-11-26 13:32:59.637933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.324 [2024-11-26 13:32:59.638044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.324 [2024-11-26 13:32:59.638055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:11.324 [2024-11-26 13:32:59.638065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:11.324 [2024-11-26 13:32:59.638075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:11.324 [2024-11-26 13:32:59.638987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.324 [2024-11-26 13:32:59.642060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.697 ms, result 0 00:20:11.324 [2024-11-26 13:32:59.643458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.324 [2024-11-26 13:32:59.656664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.585  [2024-11-26T13:33:00.155Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-26 13:33:00.035385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.585 [2024-11-26 13:33:00.044798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.044845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.585 [2024-11-26 13:33:00.044865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.585 [2024-11-26 13:33:00.044873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.044896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:11.585 [2024-11-26 13:33:00.047609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.047642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.585 [2024-11-26 13:33:00.047653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:20:11.585 [2024-11-26 13:33:00.047661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.050404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.050452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.585 [2024-11-26 13:33:00.050462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:20:11.585 [2024-11-26 13:33:00.050469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.054653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.054696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.585 [2024-11-26 13:33:00.054705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms 00:20:11.585 [2024-11-26 13:33:00.054713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.061613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.061647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:11.585 [2024-11-26 13:33:00.061657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.873 ms 00:20:11.585 [2024-11-26 13:33:00.061664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.086699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.086744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.585 [2024-11-26 13:33:00.086756] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.968 ms 00:20:11.585 [2024-11-26 13:33:00.086763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.101683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.101732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.585 [2024-11-26 13:33:00.101745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.837 ms 00:20:11.585 [2024-11-26 13:33:00.101752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.101901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.101912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.585 [2024-11-26 13:33:00.101930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:11.585 [2024-11-26 13:33:00.101938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.585 [2024-11-26 13:33:00.126680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.585 [2024-11-26 13:33:00.126728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:11.586 [2024-11-26 13:33:00.126740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.723 ms 00:20:11.586 [2024-11-26 13:33:00.126747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.586 [2024-11-26 13:33:00.150821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.586 [2024-11-26 13:33:00.150873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:11.586 [2024-11-26 13:33:00.150884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.029 ms 00:20:11.586 [2024-11-26 13:33:00.150891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.846 [2024-11-26 13:33:00.174848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.846 [2024-11-26 13:33:00.174896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.846 [2024-11-26 13:33:00.174910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.912 ms 00:20:11.846 [2024-11-26 13:33:00.174917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.846 [2024-11-26 13:33:00.198064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.846 [2024-11-26 13:33:00.198103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.846 [2024-11-26 13:33:00.198114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.078 ms 00:20:11.846 [2024-11-26 13:33:00.198122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.846 [2024-11-26 13:33:00.198161] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.846 [2024-11-26 13:33:00.198177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:11.846 [2024-11-26 13:33:00.198212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:11.846 [2024-11-26 13:33:00.198415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198787] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.847 [2024-11-26 13:33:00.198981] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.847 [2024-11-26 13:33:00.198989] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725 00:20:11.847 [2024-11-26 13:33:00.198996] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.847 [2024-11-26 13:33:00.199004] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:11.847 [2024-11-26 13:33:00.199012] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.847 [2024-11-26 13:33:00.199020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.847 [2024-11-26 13:33:00.199027] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.847 [2024-11-26 13:33:00.199037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.847 [2024-11-26 13:33:00.199045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.847 [2024-11-26 13:33:00.199051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.847 [2024-11-26 13:33:00.199057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.847 [2024-11-26 13:33:00.199064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.847 [2024-11-26 13:33:00.199072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.847 [2024-11-26 13:33:00.199081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:20:11.847 [2024-11-26 13:33:00.199088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.847 [2024-11-26 13:33:00.211924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.847 [2024-11-26 13:33:00.211960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.847 [2024-11-26 13:33:00.211971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:20:11.847 [2024-11-26 13:33:00.211983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.847 [2024-11-26 13:33:00.212350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.847 [2024-11-26 13:33:00.212368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.847 [2024-11-26 13:33:00.212377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:11.848 [2024-11-26 13:33:00.212384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.249422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.249473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.848 [2024-11-26 13:33:00.249482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.249494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.249566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.249575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.848 [2024-11-26 13:33:00.249584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.249591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.249632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.249641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.848 [2024-11-26 13:33:00.249649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.249656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.249677] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.249686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.848 [2024-11-26 13:33:00.249694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.249701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.329505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.329549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.848 [2024-11-26 13:33:00.329558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.329569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.848 [2024-11-26 13:33:00.393077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.848 [2024-11-26 13:33:00.393148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.848 [2024-11-26 13:33:00.393203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.848 [2024-11-26 13:33:00.393313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.848 [2024-11-26 13:33:00.393368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.848 [2024-11-26 13:33:00.393408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.848 [2024-11-26 13:33:00.393417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.848 [2024-11-26 13:33:00.393424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.848 [2024-11-26 13:33:00.393431] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:20:11.848 [2024-11-26 13:33:00.393490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:11.848 [2024-11-26 13:33:00.393500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:11.848 [2024-11-26 13:33:00.393508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:11.848 [2024-11-26 13:33:00.393516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:11.848 [2024-11-26 13:33:00.393642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.843 ms, result 0
00:20:12.792
00:20:12.792
00:20:12.792 13:33:01 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76907
00:20:12.792 13:33:01 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76907
00:20:12.792 13:33:01 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:12.792 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76907 ']'
00:20:12.792 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:12.792 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:12.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:12.792 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:12.792 13:33:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:12.792 [2024-11-26 13:33:01.223536] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
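The xtrace above is the harness bringing up a fresh SPDK target for the trim test: trim.sh launches spdk_tgt in the background, records its pid in svcpid, and waitforlisten polls the RPC socket until the target answers; the 'Starting SPDK' banner above and the DPDK EAL parameter line that follows are that target coming up, after which trim.sh@96 replays the saved JSON configuration with rpc.py load_config. A minimal bash sketch of the same pattern, with the polling loop paraphrased (the real helper is waitforlisten in common/autotest_common.sh, and the ftl.json filename is a hypothetical placeholder):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$spdk_tgt" -L ftl_init &   # -L enables the ftl_init debug log component
  svcpid=$!                   # assumed source of trim.sh@93; the trace shows the expanded pid 76907

  # Poll the default RPC socket up to max_retries times (100 in the trace above);
  # rpc_get_methods is just a cheap RPC used to probe whether the target is listening yet.
  for ((i = 0; i < 100; i++)); do
      "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done

  # Replay the JSON config captured earlier in the test run;
  # load_config reads the configuration from stdin.
  "$rpc" load_config < ftl.json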
00:20:12.792 [2024-11-26 13:33:01.223687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76907 ] 00:20:13.053 [2024-11-26 13:33:01.389429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.053 [2024-11-26 13:33:01.532859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.999 13:33:02 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.999 13:33:02 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:13.999 13:33:02 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.999 [2024-11-26 13:33:02.442615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.999 [2024-11-26 13:33:02.442701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.263 [2024-11-26 13:33:02.621665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.621735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.263 [2024-11-26 13:33:02.621753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.263 [2024-11-26 13:33:02.621762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.624838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.624898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.263 [2024-11-26 13:33:02.624910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:20:14.263 [2024-11-26 13:33:02.624919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.625049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.263 [2024-11-26 13:33:02.625942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.263 [2024-11-26 13:33:02.625996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.626006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.263 [2024-11-26 13:33:02.626018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:20:14.263 [2024-11-26 13:33:02.626029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.627952] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.263 [2024-11-26 13:33:02.642459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.642524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.263 [2024-11-26 13:33:02.642538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.516 ms 00:20:14.263 [2024-11-26 13:33:02.642549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.642669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.642684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.263 [2024-11-26 13:33:02.642693] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:14.263 [2024-11-26 13:33:02.642704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.651231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.651288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.263 [2024-11-26 13:33:02.651299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.471 ms 00:20:14.263 [2024-11-26 13:33:02.651309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.651428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.651474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.263 [2024-11-26 13:33:02.651484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:14.263 [2024-11-26 13:33:02.651499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.651534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.651545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.263 [2024-11-26 13:33:02.651553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:14.263 [2024-11-26 13:33:02.651563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.651589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:14.263 [2024-11-26 13:33:02.655883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.655943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.263 [2024-11-26 13:33:02.655956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:20:14.263 [2024-11-26 13:33:02.655965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.263 [2024-11-26 13:33:02.656043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.263 [2024-11-26 13:33:02.656053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.264 [2024-11-26 13:33:02.656065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:14.264 [2024-11-26 13:33:02.656076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.264 [2024-11-26 13:33:02.656100] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.264 [2024-11-26 13:33:02.656120] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.264 [2024-11-26 13:33:02.656166] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.264 [2024-11-26 13:33:02.656183] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:14.264 [2024-11-26 13:33:02.656293] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.264 [2024-11-26 13:33:02.656305] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.264 [2024-11-26 13:33:02.656322] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.264 [2024-11-26 13:33:02.656333] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656344] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656355] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:14.264 [2024-11-26 13:33:02.656365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.264 [2024-11-26 13:33:02.656373] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.264 [2024-11-26 13:33:02.656396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.264 [2024-11-26 13:33:02.656405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.264 [2024-11-26 13:33:02.656415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.264 [2024-11-26 13:33:02.656424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:20:14.264 [2024-11-26 13:33:02.656433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.264 [2024-11-26 13:33:02.656545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.264 [2024-11-26 13:33:02.656557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.264 [2024-11-26 13:33:02.656565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:14.264 [2024-11-26 13:33:02.656575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.264 [2024-11-26 13:33:02.656680] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.264 [2024-11-26 13:33:02.656702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.264 [2024-11-26 13:33:02.656713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.264 [2024-11-26 13:33:02.656741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.264 [2024-11-26 13:33:02.656769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.264 [2024-11-26 13:33:02.656785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.264 [2024-11-26 13:33:02.656794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:14.264 [2024-11-26 13:33:02.656801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.264 [2024-11-26 13:33:02.656810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.264 [2024-11-26 13:33:02.656817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:14.264 [2024-11-26 13:33:02.656826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 
[2024-11-26 13:33:02.656833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.264 [2024-11-26 13:33:02.656846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.264 [2024-11-26 13:33:02.656875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.264 [2024-11-26 13:33:02.656902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.264 [2024-11-26 13:33:02.656924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.264 [2024-11-26 13:33:02.656948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.264 [2024-11-26 13:33:02.656963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.264 [2024-11-26 13:33:02.656969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:14.264 [2024-11-26 13:33:02.656980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.264 [2024-11-26 13:33:02.656987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.264 [2024-11-26 13:33:02.656995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:14.264 [2024-11-26 13:33:02.657001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.264 [2024-11-26 13:33:02.657009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.264 [2024-11-26 13:33:02.657017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:14.264 [2024-11-26 13:33:02.657027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.657034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.264 [2024-11-26 13:33:02.657042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:14.264 [2024-11-26 13:33:02.657049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.657058] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.264 [2024-11-26 13:33:02.657069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.264 [2024-11-26 13:33:02.657078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.264 [2024-11-26 13:33:02.657085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.264 [2024-11-26 13:33:02.657095] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:14.264 [2024-11-26 13:33:02.657103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.264 [2024-11-26 13:33:02.657114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.264 [2024-11-26 13:33:02.657122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.264 [2024-11-26 13:33:02.657131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.264 [2024-11-26 13:33:02.657139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.264 [2024-11-26 13:33:02.657151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.264 [2024-11-26 13:33:02.657161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.264 [2024-11-26 13:33:02.657174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:14.264 [2024-11-26 13:33:02.657182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:14.264 [2024-11-26 13:33:02.657193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:14.264 [2024-11-26 13:33:02.657202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:14.264 [2024-11-26 13:33:02.657213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:14.264 [2024-11-26 13:33:02.657220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:14.264 [2024-11-26 13:33:02.657229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:14.264 [2024-11-26 13:33:02.657237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:14.264 [2024-11-26 13:33:02.657247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:14.264 [2024-11-26 13:33:02.657254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:14.264 [2024-11-26 13:33:02.657263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:14.264 [2024-11-26 13:33:02.657271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:14.264 [2024-11-26 13:33:02.657281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:14.265 [2024-11-26 13:33:02.657289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:14.265 [2024-11-26 13:33:02.657299] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.265 [2024-11-26 
13:33:02.657308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.265 [2024-11-26 13:33:02.657320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.265 [2024-11-26 13:33:02.657328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.265 [2024-11-26 13:33:02.657338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.265 [2024-11-26 13:33:02.657346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.265 [2024-11-26 13:33:02.657356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.657363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.265 [2024-11-26 13:33:02.657373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:20:14.265 [2024-11-26 13:33:02.657383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.690134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.690191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.265 [2024-11-26 13:33:02.690206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.663 ms 00:20:14.265 [2024-11-26 13:33:02.690218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.690355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.690366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.265 [2024-11-26 13:33:02.690377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:14.265 [2024-11-26 13:33:02.690386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.725551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.725611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.265 [2024-11-26 13:33:02.725625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.138 ms 00:20:14.265 [2024-11-26 13:33:02.725634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.725728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.725738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.265 [2024-11-26 13:33:02.725749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.265 [2024-11-26 13:33:02.725758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.726337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.726387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.265 [2024-11-26 13:33:02.726399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:20:14.265 [2024-11-26 13:33:02.726407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.726583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.726594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.265 [2024-11-26 13:33:02.726605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:14.265 [2024-11-26 13:33:02.726613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.745111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.745161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.265 [2024-11-26 13:33:02.745175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:20:14.265 [2024-11-26 13:33:02.745183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.759998] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:14.265 [2024-11-26 13:33:02.760069] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.265 [2024-11-26 13:33:02.760086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.760094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.265 [2024-11-26 13:33:02.760107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.778 ms 00:20:14.265 [2024-11-26 13:33:02.760122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.786431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.786492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.265 [2024-11-26 13:33:02.786509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.205 ms 00:20:14.265 [2024-11-26 13:33:02.786517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.799402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.799470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.265 [2024-11-26 13:33:02.799488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.777 ms 00:20:14.265 [2024-11-26 13:33:02.799495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.812081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.812127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.265 [2024-11-26 13:33:02.812141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.488 ms 00:20:14.265 [2024-11-26 13:33:02.812149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.265 [2024-11-26 13:33:02.812878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.265 [2024-11-26 13:33:02.812907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.265 [2024-11-26 13:33:02.812919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:20:14.265 [2024-11-26 13:33:02.812927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 
13:33:02.901183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.901259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.527 [2024-11-26 13:33:02.901282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.222 ms 00:20:14.527 [2024-11-26 13:33:02.901292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.913074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.527 [2024-11-26 13:33:02.933605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.933672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.527 [2024-11-26 13:33:02.933689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.191 ms 00:20:14.527 [2024-11-26 13:33:02.933700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.933799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.933814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.527 [2024-11-26 13:33:02.933823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:14.527 [2024-11-26 13:33:02.933834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.933893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.933904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.527 [2024-11-26 13:33:02.933913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:14.527 [2024-11-26 13:33:02.933926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.933952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.933963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.527 [2024-11-26 13:33:02.933971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.527 [2024-11-26 13:33:02.933983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.934019] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.527 [2024-11-26 13:33:02.934033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.934043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.527 [2024-11-26 13:33:02.934053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.527 [2024-11-26 13:33:02.934061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.960473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.960528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.527 [2024-11-26 13:33:02.960545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.378 ms 00:20:14.527 [2024-11-26 13:33:02.960554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.527 [2024-11-26 13:33:02.960677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.527 [2024-11-26 13:33:02.960688] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:14.527 [2024-11-26 13:33:02.960703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:20:14.527 [2024-11-26 13:33:02.960712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.527 [2024-11-26 13:33:02.961997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:14.527 [2024-11-26 13:33:02.965490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.998 ms, result 0
00:20:14.527 [2024-11-26 13:33:02.967711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:14.527 Some configs were skipped because the RPC state that can call them passed over.
00:20:14.527 13:33:03 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:14.789 [2024-11-26 13:33:03.208478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:14.790 [2024-11-26 13:33:03.208551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:14.790 [2024-11-26 13:33:03.208566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms
00:20:14.790 [2024-11-26 13:33:03.208579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:14.790 [2024-11-26 13:33:03.208617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.372 ms, result 0
00:20:14.790 true
00:20:14.790 13:33:03 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:15.051 [2024-11-26 13:33:03.412231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:15.051 [2024-11-26 13:33:03.412291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:15.051 [2024-11-26 13:33:03.412306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.739 ms
00:20:15.051 [2024-11-26 13:33:03.412314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:15.051 [2024-11-26 13:33:03.412354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.871 ms, result 0
00:20:15.051 true
00:20:15.051 13:33:03 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76907
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76907 ']'
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76907
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76907
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:15.051 killing process with pid 76907
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76907'
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76907
00:20:15.051 13:33:03 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76907
00:20:15.625 [2024-11-26 13:33:04.152726]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.152778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.625 [2024-11-26 13:33:04.152789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.625 [2024-11-26 13:33:04.152796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.152816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.625 [2024-11-26 13:33:04.154977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.155006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.625 [2024-11-26 13:33:04.155017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:20:15.625 [2024-11-26 13:33:04.155024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.155263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.155280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.625 [2024-11-26 13:33:04.155289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:15.625 [2024-11-26 13:33:04.155294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.158590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.158614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.625 [2024-11-26 13:33:04.158624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:20:15.625 [2024-11-26 13:33:04.158631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.163828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.163855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.625 [2024-11-26 13:33:04.163864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:20:15.625 [2024-11-26 13:33:04.163870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.170991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.171024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.625 [2024-11-26 13:33:04.171034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:20:15.625 [2024-11-26 13:33:04.171040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.625 [2024-11-26 13:33:04.177353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.625 [2024-11-26 13:33:04.177385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.625 [2024-11-26 13:33:04.177394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.280 ms 00:20:15.625 [2024-11-26 13:33:04.177401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.626 [2024-11-26 13:33:04.177519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.626 [2024-11-26 13:33:04.177529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.626 [2024-11-26 13:33:04.177536] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:15.626 [2024-11-26 13:33:04.177542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.626 [2024-11-26 13:33:04.185229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.626 [2024-11-26 13:33:04.185256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.626 [2024-11-26 13:33:04.185264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.670 ms 00:20:15.626 [2024-11-26 13:33:04.185269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.888 [2024-11-26 13:33:04.192394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.888 [2024-11-26 13:33:04.192421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.889 [2024-11-26 13:33:04.192432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.083 ms 00:20:15.889 [2024-11-26 13:33:04.192437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.889 [2024-11-26 13:33:04.199194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.889 [2024-11-26 13:33:04.199221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.889 [2024-11-26 13:33:04.199229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.711 ms 00:20:15.889 [2024-11-26 13:33:04.199234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.889 [2024-11-26 13:33:04.207077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.889 [2024-11-26 13:33:04.207106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.889 [2024-11-26 13:33:04.207114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:20:15.889 [2024-11-26 13:33:04.207119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.889 [2024-11-26 13:33:04.207155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.889 [2024-11-26 13:33:04.207166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207233] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 
[2024-11-26 13:33:04.207393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.889 [2024-11-26 13:33:04.207565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free
00:20:15.889 [2024-11-26 13:33:04.207571 .. 207825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 61-100: 0 / 261120 wr_cnt: 0 state: free (identical for all 40 bands)
00:20:15.890 [2024-11-26 13:33:04.207840] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:15.890 [2024-11-26 13:33:04.207850] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725
00:20:15.890 [2024-11-26 13:33:04.207859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:15.890 [2024-11-26 13:33:04.207866] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:15.890 [2024-11-26 13:33:04.207871] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:15.890 [2024-11-26 13:33:04.207878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:15.890 [2024-11-26 13:33:04.207883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:15.890 [2024-11-26 13:33:04.207890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:15.890 [2024-11-26 13:33:04.207896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:15.890 [2024-11-26 13:33:04.207902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:15.890 [2024-11-26 13:33:04.207907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
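The statistics block above is internally consistent: WAF (write amplification factor) is total writes divided by user writes, and 960 total writes against 0 user writes yields the reported "inf" (every write so far was FTL housekeeping, none was user data). A minimal sketch for recomputing the figure from a saved console log; the console.log filename is hypothetical:

  # Pull the two FTL write counters out of a saved log and recompute WAF;
  # prints "inf" when user writes are zero, matching the dump above.
  awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF}
       END {w = (u == 0) ? "inf" : t / u; print "WAF: " w}' console.log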
00:20:15.890 [2024-11-26 13:33:04.207913] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.760 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.217449] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 9.482 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.217773] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.254 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.252964] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.253081] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.253138] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.253173] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.313587] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362450] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362560] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362606] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362694] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362743] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362795] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362858] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
00:20:15.890 [2024-11-26 13:33:04.362983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.238 ms, result 0
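With the first FTL instance torn down cleanly ('FTL shutdown', 210.238 ms, result 0), the test reads the just-written data back through a fresh spdk_dd process. The invocation below (from trim.sh@105) copies 65536 blocks from the input bdev ftl0 into a plain file; at this FTL device's 4 KiB block size (an inference, but consistent with the 256 MB total reported by the copy progress further down) that is 65536 x 4 KiB = 256 MiB. Restated with the flags spelled out:

  # Read-back step, as invoked by test/ftl/trim.sh:
  #   --ib    input bdev to read from (the FTL bdev ftl0)
  #   --of    output file on the host filesystem
  #   --count number of blocks to copy (65536 x 4 KiB = 256 MiB)
  #   --json  SPDK config that reconstructs the bdev stack (the nvc0n1p0
  #           cache and base device seen in the notices below)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json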
00:20:16.463 13:33:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:16.463 [2024-11-26 13:33:04.933539] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
[2024-11-26 13:33:04.933653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76963 ]
00:20:16.725 [2024-11-26 13:33:05.088844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:16.725 [2024-11-26 13:33:05.165489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:16.988 [2024-11-26 13:33:05.372255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:16.988 [2024-11-26 13:33:05.372308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:16.988 [2024-11-26 13:33:05.525750] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.004 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.528507] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 2.656 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.528637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:16.988 [2024-11-26 13:33:05.529298] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:16.988 [2024-11-26 13:33:05.529323] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.693 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.530644] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:16.988 [2024-11-26 13:33:05.543228] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 12.584 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.543388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.025 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.548545] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 5.075 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.548683] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.053 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.548737] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.008 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.548782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:16.988 [2024-11-26 13:33:05.552125] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 3.349 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.552207] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.011 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.552251] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:16.988 [2024-11-26 13:33:05.552269] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:16.988 [2024-11-26 13:33:05.552302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:16.988 [2024-11-26 13:33:05.552318] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:16.988 [2024-11-26 13:33:05.552420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:16.988 [2024-11-26 13:33:05.552437] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:16.988 [2024-11-26 13:33:05.552468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
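The layout tables that follow express offsets and sizes two ways: in MiB in the dump_region lines, and in 4 KiB blocks (hex) in the superblock metadata lines. The two agree; for example the type:0x2 entry at blk_offs:0x20 blk_sz:0x5a00 works out to an offset of 0.12 MiB and a size of 90.00 MiB, evidently the l2p region dumped below. A quick shell check, with the 4 KiB block size assumed from the FTL defaults:

  # Convert the region's block offset/size (hex) to MiB; bc truncates
  # 0.125 to .12, matching the dump_region formatting below.
  echo "scale=2; $((0x20)) * 4096 / 1048576" | bc     # -> .12   (offset)
  echo "scale=2; $((0x5a00)) * 4096 / 1048576" | bc   # -> 90.00 (size)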
00:20:16.988 [2024-11-26 13:33:05.552481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:16.988 [2024-11-26 13:33:05.552490] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:16.988 [2024-11-26 13:33:05.552498] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:16.988 [2024-11-26 13:33:05.552505] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:16.988 [2024-11-26 13:33:05.552512] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:16.988 [2024-11-26 13:33:05.552520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:16.988 [2024-11-26 13:33:05.552527] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.278 ms, status 0
00:20:16.988 [2024-11-26 13:33:05.552636] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.068 ms, status 0
00:20:16.989 [2024-11-26 13:33:05.552782] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:16.989 [2024-11-26 13:33:05.552793] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.552817] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
00:20:16.989 [2024-11-26 13:33:05.552837] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
00:20:16.989 [2024-11-26 13:33:05.552856] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
00:20:16.989 [2024-11-26 13:33:05.552881] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.552900] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.552919] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
00:20:16.989 [2024-11-26 13:33:05.552938] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
00:20:16.989 [2024-11-26 13:33:05.552956] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
00:20:16.989 [2024-11-26 13:33:05.552977] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
00:20:16.989 [2024-11-26 13:33:05.552996] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
00:20:16.989 [2024-11-26 13:33:05.553015] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
00:20:16.989 [2024-11-26 13:33:05.553034] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.553053] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.553071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:16.989 [2024-11-26 13:33:05.553078] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:20:16.989 [2024-11-26 13:33:05.553100] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:20:16.989 [2024-11-26 13:33:05.553120] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:20:16.989 [2024-11-26 13:33:05.553141] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:16.989 [2024-11-26 13:33:05.553150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:17.252 [2024-11-26 13:33:05.553165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:17.252 [2024-11-26 13:33:05.553172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:17.252 [2024-11-26 13:33:05.553179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:17.252 [2024-11-26 13:33:05.553186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:17.252 [2024-11-26 13:33:05.553193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:17.252 [2024-11-26 13:33:05.553200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:17.252 [2024-11-26 13:33:05.553207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:17.252 [2024-11-26 13:33:05.553214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:17.252 [2024-11-26 13:33:05.553221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:17.252 [2024-11-26 13:33:05.553256] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:17.252 [2024-11-26 13:33:05.553264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:17.252 [2024-11-26 13:33:05.553279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:17.252 [2024-11-26 13:33:05.553285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:17.252 [2024-11-26 13:33:05.553292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:17.252 [2024-11-26 13:33:05.553299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.585 ms, status 0
00:20:17.252 [2024-11-26 13:33:05.578900] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 25.513 ms, status 0
00:20:17.252 [2024-11-26 13:33:05.579071] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.061 ms, status 0
00:20:17.252 [2024-11-26 13:33:05.621697] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 42.579 ms, status 0
00:20:17.252 [2024-11-26 13:33:05.621846] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.002 ms, status 0
00:20:17.252 [2024-11-26 13:33:05.622183] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.288 ms, status 0
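Another cross-check ties the capacity lines to the layout: 23592960 L2P entries at an address size of 4 bytes is exactly the 90 MiB l2p region above. Pure shell arithmetic:

  # 23592960 L2P entries x 4 bytes/entry, expressed in MiB -> 90
  echo $(( 23592960 * 4 / 1024 / 1024 ))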
00:20:17.253 [2024-11-26 13:33:05.622361] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.104 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.635663] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 13.247 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.648063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:17.253 [2024-11-26 13:33:05.648099] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:17.253 [2024-11-26 13:33:05.648110] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 12.304 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.672200] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 23.996 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.683531] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 11.199 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.694865] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 11.224 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.695536] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.538 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.749395] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 53.795 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.760132] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:17.253 [2024-11-26 13:33:05.773877] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 24.317 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.774013] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.011 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.774084] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.028 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.774137] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.005 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.774187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:17.253 [2024-11-26 13:33:05.774196] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.010 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.797311] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 23.071 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.797476] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.060 ms, status 0
00:20:17.253 [2024-11-26 13:33:05.798405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:17.253 [2024-11-26 13:33:05.801542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.390 ms, result 0
00:20:17.253 [2024-11-26 13:33:05.802172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:17.253 [2024-11-26 13:33:05.815110] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:18.638  [2024-11-26T13:33:08.153Z] Copying: 20/256 [MB] (20 MBps) [2024-11-26T13:33:09.096Z] Copying: 42/256 [MB] (22 MBps) [2024-11-26T13:33:10.056Z] Copying: 61/256 [MB] (18 MBps) [2024-11-26T13:33:10.999Z] Copying: 77/256 [MB] (16 MBps) [2024-11-26T13:33:11.943Z] Copying: 102/256 [MB] (25 MBps) [2024-11-26T13:33:12.886Z] Copying: 116/256 [MB] (14 MBps) [2024-11-26T13:33:14.274Z] Copying: 133/256 [MB] (16 MBps) [2024-11-26T13:33:15.219Z] Copying: 147/256 [MB] (13 MBps) [2024-11-26T13:33:16.164Z] Copying: 159/256 [MB] (12 MBps) [2024-11-26T13:33:17.111Z] Copying: 170/256 [MB] (11 MBps) [2024-11-26T13:33:18.059Z] Copying: 182/256 [MB] (11 MBps) [2024-11-26T13:33:19.004Z] Copying: 203/256 [MB] (21 MBps) [2024-11-26T13:33:19.950Z] Copying: 214/256 [MB] (11 MBps) [2024-11-26T13:33:20.894Z] Copying: 229/256 [MB] (14 MBps) [2024-11-26T13:33:21.465Z] Copying: 245/256 [MB] (16 MBps) [2024-11-26T13:33:22.038Z] Copying: 256/256 [MB] (average 16 MBps)
[2024-11-26 13:33:21.794750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:33.468 [2024-11-26 13:33:21.806232] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.005 ms, status 0
00:20:33.468 [2024-11-26 13:33:21.806357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:33.468 [2024-11-26 13:33:21.809421] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.047 ms, status 0
00:20:33.468 [2024-11-26 13:33:21.809805] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.269 ms, status 0
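The copy pass above also checks out against the wall clock: roughly 16 seconds elapse between FTL startup finishing (13:33:05.8) and the final copy tick (13:33:22.0) in the timestamps above, so the reported "average 16 MBps" is what simple division predicts:

  # 256 MB over the ~16 s copy window seen in the timestamps above
  echo "scale=1; 256 / 16" | bc   # -> 16.0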
00:20:33.468 [2024-11-26 13:33:21.813581] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 3.724 ms, status 0
00:20:33.468 [2024-11-26 13:33:21.821306] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 7.642 ms, status 0
00:20:33.468 [2024-11-26 13:33:21.849148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 27.402 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.865948] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 16.638 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.866204] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.092 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.892418] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 26.152 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.918675] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 26.087 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.943575] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 24.752 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.968482] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 24.746 ms, status 0
00:20:33.469 [2024-11-26 13:33:21.968609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:33.469 [2024-11-26 13:33:21.968627 .. 969475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:20:33.470 [2024-11-26 13:33:21.969492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:33.470 [2024-11-26 13:33:21.969501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d09fa75c-4f30-4107-83fe-472028788725
00:20:33.470 [2024-11-26 13:33:21.969509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:33.470 [2024-11-26 13:33:21.969518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:33.470 [2024-11-26 13:33:21.969526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:33.470 [2024-11-26 13:33:21.969536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:33.470 [2024-11-26 13:33:21.969544] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:33.470 [2024-11-26 13:33:21.969552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:33.470 [2024-11-26 13:33:21.969560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:33.470 [2024-11-26 13:33:21.969566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:33.470 [2024-11-26 13:33:21.969573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:33.470 [2024-11-26 13:33:21.969580] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.973 ms, status 0
00:20:33.470 [2024-11-26 13:33:21.983195] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 13.542 ms, status 0
00:20:33.470 [2024-11-26 13:33:21.983734] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.421 ms, status 0
00:20:33.470 [2024-11-26 13:33:22.023163] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:20:33.470 [2024-11-26 13:33:22.023368] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:20:33.471 [2024-11-26 13:33:22.023473] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
00:20:33.471 [2024-11-26 13:33:22.023523] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
00:20:33.732 [2024-11-26 13:33:22.108355] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
00:20:33.732 [2024-11-26 13:33:22.178064] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
00:20:33.732 [2024-11-26 13:33:22.178246] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
00:20:33.732 [2024-11-26 13:33:22.178310] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
00:20:33.732 [2024-11-26 13:33:22.178473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:33.732 [2024-11-26 13:33:22.178485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
[2024-11-26 13:33:22.178495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.732 [2024-11-26 13:33:22.178503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.732 [2024-11-26 13:33:22.178540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.732 [2024-11-26 13:33:22.178550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.732 [2024-11-26 13:33:22.178563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.732 [2024-11-26 13:33:22.178571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.732 [2024-11-26 13:33:22.178618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.732 [2024-11-26 13:33:22.178627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.732 [2024-11-26 13:33:22.178635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.732 [2024-11-26 13:33:22.178643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.732 [2024-11-26 13:33:22.178692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.732 [2024-11-26 13:33:22.178703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.732 [2024-11-26 13:33:22.178715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.732 [2024-11-26 13:33:22.178723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.732 [2024-11-26 13:33:22.178899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.647 ms, result 0 00:20:34.673 00:20:34.673 00:20:34.673 13:33:22 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:34.934 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:34.934 13:33:23 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:34.934 13:33:23 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:34.934 13:33:23 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.196 13:33:23 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.196 13:33:23 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:35.196 13:33:23 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:35.196 13:33:23 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76907 00:20:35.196 13:33:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76907 ']' 00:20:35.196 13:33:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76907 00:20:35.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76907) - No such process 00:20:35.196 Process with pid 76907 is not found 00:20:35.196 13:33:23 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76907 is not found' 00:20:35.196 ************************************ 00:20:35.196 END TEST ftl_trim 00:20:35.196 ************************************ 00:20:35.196 00:20:35.196 real 1m11.356s 00:20:35.196 user 1m27.556s 00:20:35.196 sys 0m13.666s 00:20:35.196 13:33:23 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.196 13:33:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:35.196 13:33:23 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:35.196 13:33:23 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:35.196 13:33:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.196 13:33:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:35.196 ************************************ 00:20:35.196 START TEST ftl_restore 00:20:35.196 ************************************ 00:20:35.196 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:35.196 * Looking for test storage... 00:20:35.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.196 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.196 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.196 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.458 13:33:23 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.458 --rc genhtml_branch_coverage=1 00:20:35.458 --rc genhtml_function_coverage=1 00:20:35.458 --rc genhtml_legend=1 00:20:35.458 --rc geninfo_all_blocks=1 00:20:35.458 --rc geninfo_unexecuted_blocks=1 00:20:35.458 00:20:35.458 ' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.458 --rc genhtml_branch_coverage=1 00:20:35.458 --rc genhtml_function_coverage=1 00:20:35.458 --rc genhtml_legend=1 00:20:35.458 --rc geninfo_all_blocks=1 00:20:35.458 --rc geninfo_unexecuted_blocks=1 00:20:35.458 00:20:35.458 ' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.458 --rc genhtml_branch_coverage=1 00:20:35.458 --rc genhtml_function_coverage=1 00:20:35.458 --rc genhtml_legend=1 00:20:35.458 --rc geninfo_all_blocks=1 00:20:35.458 --rc geninfo_unexecuted_blocks=1 00:20:35.458 00:20:35.458 ' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.458 --rc genhtml_branch_coverage=1 00:20:35.458 --rc genhtml_function_coverage=1 00:20:35.458 --rc genhtml_legend=1 00:20:35.458 --rc geninfo_all_blocks=1 00:20:35.458 --rc geninfo_unexecuted_blocks=1 00:20:35.458 00:20:35.458 ' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fB1oAUsILK 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:35.458 
13:33:23 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77219 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77219 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77219 ']' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.458 13:33:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:35.458 13:33:23 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.458 [2024-11-26 13:33:23.934213] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:20:35.458 [2024-11-26 13:33:23.934369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77219 ] 00:20:35.719 [2024-11-26 13:33:24.096669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.719 [2024-11-26 13:33:24.233326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.665 13:33:24 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.665 13:33:24 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:36.665 13:33:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:36.927 13:33:25 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:36.927 13:33:25 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:36.927 13:33:25 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:36.927 { 00:20:36.927 "name": "nvme0n1", 00:20:36.927 "aliases": [ 00:20:36.927 "df7167a6-0df1-4f87-b830-290e781aacf0" 00:20:36.927 ], 00:20:36.927 "product_name": "NVMe disk", 00:20:36.927 "block_size": 4096, 00:20:36.927 "num_blocks": 1310720, 00:20:36.927 "uuid": 
"df7167a6-0df1-4f87-b830-290e781aacf0", 00:20:36.927 "numa_id": -1, 00:20:36.927 "assigned_rate_limits": { 00:20:36.927 "rw_ios_per_sec": 0, 00:20:36.927 "rw_mbytes_per_sec": 0, 00:20:36.927 "r_mbytes_per_sec": 0, 00:20:36.927 "w_mbytes_per_sec": 0 00:20:36.927 }, 00:20:36.927 "claimed": true, 00:20:36.927 "claim_type": "read_many_write_one", 00:20:36.927 "zoned": false, 00:20:36.927 "supported_io_types": { 00:20:36.927 "read": true, 00:20:36.927 "write": true, 00:20:36.927 "unmap": true, 00:20:36.927 "flush": true, 00:20:36.927 "reset": true, 00:20:36.927 "nvme_admin": true, 00:20:36.927 "nvme_io": true, 00:20:36.927 "nvme_io_md": false, 00:20:36.927 "write_zeroes": true, 00:20:36.927 "zcopy": false, 00:20:36.927 "get_zone_info": false, 00:20:36.927 "zone_management": false, 00:20:36.927 "zone_append": false, 00:20:36.927 "compare": true, 00:20:36.927 "compare_and_write": false, 00:20:36.927 "abort": true, 00:20:36.927 "seek_hole": false, 00:20:36.927 "seek_data": false, 00:20:36.927 "copy": true, 00:20:36.927 "nvme_iov_md": false 00:20:36.927 }, 00:20:36.927 "driver_specific": { 00:20:36.927 "nvme": [ 00:20:36.927 { 00:20:36.927 "pci_address": "0000:00:11.0", 00:20:36.927 "trid": { 00:20:36.927 "trtype": "PCIe", 00:20:36.927 "traddr": "0000:00:11.0" 00:20:36.927 }, 00:20:36.927 "ctrlr_data": { 00:20:36.927 "cntlid": 0, 00:20:36.927 "vendor_id": "0x1b36", 00:20:36.927 "model_number": "QEMU NVMe Ctrl", 00:20:36.927 "serial_number": "12341", 00:20:36.927 "firmware_revision": "8.0.0", 00:20:36.927 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:36.927 "oacs": { 00:20:36.927 "security": 0, 00:20:36.927 "format": 1, 00:20:36.927 "firmware": 0, 00:20:36.927 "ns_manage": 1 00:20:36.927 }, 00:20:36.927 "multi_ctrlr": false, 00:20:36.927 "ana_reporting": false 00:20:36.927 }, 00:20:36.927 "vs": { 00:20:36.927 "nvme_version": "1.4" 00:20:36.927 }, 00:20:36.927 "ns_data": { 00:20:36.927 "id": 1, 00:20:36.927 "can_share": false 00:20:36.927 } 00:20:36.927 } 00:20:36.927 ], 00:20:36.927 "mp_policy": "active_passive" 00:20:36.927 } 00:20:36.927 } 00:20:36.927 ]' 00:20:36.927 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.189 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.189 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.189 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:37.189 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:37.189 13:33:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:37.189 13:33:25 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:37.189 13:33:25 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:37.189 13:33:25 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:37.189 13:33:25 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:37.189 13:33:25 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.449 13:33:25 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=48ccfd19-e773-41e9-bf82-258e9264f390 00:20:37.449 13:33:25 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:37.449 13:33:25 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48ccfd19-e773-41e9-bf82-258e9264f390 00:20:37.449 13:33:26 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:37.711 13:33:26 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5b54a49f-a520-49f7-bdab-a730baef84b6 00:20:37.711 13:33:26 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5b54a49f-a520-49f7-bdab-a730baef84b6 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:37.972 13:33:26 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:37.972 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:37.972 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.972 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.972 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.972 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:38.233 { 00:20:38.233 "name": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:38.233 "aliases": [ 00:20:38.233 "lvs/nvme0n1p0" 00:20:38.233 ], 00:20:38.233 "product_name": "Logical Volume", 00:20:38.233 "block_size": 4096, 00:20:38.233 "num_blocks": 26476544, 00:20:38.233 "uuid": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:38.233 "assigned_rate_limits": { 00:20:38.233 "rw_ios_per_sec": 0, 00:20:38.233 "rw_mbytes_per_sec": 0, 00:20:38.233 "r_mbytes_per_sec": 0, 00:20:38.233 "w_mbytes_per_sec": 0 00:20:38.233 }, 00:20:38.233 "claimed": false, 00:20:38.233 "zoned": false, 00:20:38.233 "supported_io_types": { 00:20:38.233 "read": true, 00:20:38.233 "write": true, 00:20:38.233 "unmap": true, 00:20:38.233 "flush": false, 00:20:38.233 "reset": true, 00:20:38.233 "nvme_admin": false, 00:20:38.233 "nvme_io": false, 00:20:38.233 "nvme_io_md": false, 00:20:38.233 "write_zeroes": true, 00:20:38.233 "zcopy": false, 00:20:38.233 "get_zone_info": false, 00:20:38.233 "zone_management": false, 00:20:38.233 "zone_append": false, 00:20:38.233 "compare": false, 00:20:38.233 "compare_and_write": false, 00:20:38.233 "abort": false, 00:20:38.233 "seek_hole": true, 00:20:38.233 "seek_data": true, 00:20:38.233 "copy": false, 00:20:38.233 "nvme_iov_md": false 00:20:38.233 }, 00:20:38.233 "driver_specific": { 00:20:38.233 "lvol": { 00:20:38.233 "lvol_store_uuid": "5b54a49f-a520-49f7-bdab-a730baef84b6", 00:20:38.233 "base_bdev": "nvme0n1", 00:20:38.233 "thin_provision": true, 00:20:38.233 "num_allocated_clusters": 0, 00:20:38.233 "snapshot": false, 00:20:38.233 "clone": false, 00:20:38.233 "esnap_clone": false 00:20:38.233 } 00:20:38.233 } 00:20:38.233 } 00:20:38.233 ]' 00:20:38.233 13:33:26 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:38.233 13:33:26 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:38.233 13:33:26 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:38.233 13:33:26 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:38.233 13:33:26 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:38.495 13:33:27 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:38.495 13:33:27 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:38.495 13:33:27 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:38.495 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:38.495 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:38.495 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:38.495 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:38.495 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:38.757 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:38.757 { 00:20:38.757 "name": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:38.757 "aliases": [ 00:20:38.757 "lvs/nvme0n1p0" 00:20:38.757 ], 00:20:38.757 "product_name": "Logical Volume", 00:20:38.757 "block_size": 4096, 00:20:38.757 "num_blocks": 26476544, 00:20:38.757 "uuid": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:38.757 "assigned_rate_limits": { 00:20:38.757 "rw_ios_per_sec": 0, 00:20:38.757 "rw_mbytes_per_sec": 0, 00:20:38.757 "r_mbytes_per_sec": 0, 00:20:38.757 "w_mbytes_per_sec": 0 00:20:38.757 }, 00:20:38.757 "claimed": false, 00:20:38.757 "zoned": false, 00:20:38.757 "supported_io_types": { 00:20:38.757 "read": true, 00:20:38.757 "write": true, 00:20:38.757 "unmap": true, 00:20:38.757 "flush": false, 00:20:38.757 "reset": true, 00:20:38.757 "nvme_admin": false, 00:20:38.757 "nvme_io": false, 00:20:38.757 "nvme_io_md": false, 00:20:38.757 "write_zeroes": true, 00:20:38.757 "zcopy": false, 00:20:38.757 "get_zone_info": false, 00:20:38.757 "zone_management": false, 00:20:38.757 "zone_append": false, 00:20:38.757 "compare": false, 00:20:38.757 "compare_and_write": false, 00:20:38.757 "abort": false, 00:20:38.757 "seek_hole": true, 00:20:38.757 "seek_data": true, 00:20:38.757 "copy": false, 00:20:38.757 "nvme_iov_md": false 00:20:38.757 }, 00:20:38.757 "driver_specific": { 00:20:38.757 "lvol": { 00:20:38.757 "lvol_store_uuid": "5b54a49f-a520-49f7-bdab-a730baef84b6", 00:20:38.757 "base_bdev": "nvme0n1", 00:20:38.757 "thin_provision": true, 00:20:38.757 "num_allocated_clusters": 0, 00:20:38.757 "snapshot": false, 00:20:38.757 "clone": false, 00:20:38.757 "esnap_clone": false 00:20:38.757 } 00:20:38.757 } 00:20:38.757 } 00:20:38.757 ]' 00:20:38.757 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:20:38.757 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:38.757 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.019 13:33:27 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:39.019 13:33:27 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:39.019 13:33:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:39.019 13:33:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:39.019 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79d57100-adfd-46ab-83ce-12e421e62fd4 00:20:39.280 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.280 { 00:20:39.280 "name": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:39.280 "aliases": [ 00:20:39.280 "lvs/nvme0n1p0" 00:20:39.280 ], 00:20:39.280 "product_name": "Logical Volume", 00:20:39.280 "block_size": 4096, 00:20:39.280 "num_blocks": 26476544, 00:20:39.280 "uuid": "79d57100-adfd-46ab-83ce-12e421e62fd4", 00:20:39.280 "assigned_rate_limits": { 00:20:39.280 "rw_ios_per_sec": 0, 00:20:39.280 "rw_mbytes_per_sec": 0, 00:20:39.280 "r_mbytes_per_sec": 0, 00:20:39.280 "w_mbytes_per_sec": 0 00:20:39.280 }, 00:20:39.280 "claimed": false, 00:20:39.280 "zoned": false, 00:20:39.280 "supported_io_types": { 00:20:39.280 "read": true, 00:20:39.280 "write": true, 00:20:39.280 "unmap": true, 00:20:39.280 "flush": false, 00:20:39.280 "reset": true, 00:20:39.280 "nvme_admin": false, 00:20:39.280 "nvme_io": false, 00:20:39.280 "nvme_io_md": false, 00:20:39.280 "write_zeroes": true, 00:20:39.280 "zcopy": false, 00:20:39.280 "get_zone_info": false, 00:20:39.280 "zone_management": false, 00:20:39.280 "zone_append": false, 00:20:39.280 "compare": false, 00:20:39.280 "compare_and_write": false, 00:20:39.280 "abort": false, 00:20:39.280 "seek_hole": true, 00:20:39.280 "seek_data": true, 00:20:39.280 "copy": false, 00:20:39.280 "nvme_iov_md": false 00:20:39.280 }, 00:20:39.280 "driver_specific": { 00:20:39.280 "lvol": { 00:20:39.280 "lvol_store_uuid": "5b54a49f-a520-49f7-bdab-a730baef84b6", 00:20:39.280 "base_bdev": "nvme0n1", 00:20:39.280 "thin_provision": true, 00:20:39.280 "num_allocated_clusters": 0, 00:20:39.280 "snapshot": false, 00:20:39.280 "clone": false, 00:20:39.280 "esnap_clone": false 00:20:39.280 } 00:20:39.280 } 00:20:39.280 } 00:20:39.280 ]' 00:20:39.280 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:39.280 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.280 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.542 13:33:27 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.542 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.542 13:33:27 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 79d57100-adfd-46ab-83ce-12e421e62fd4 --l2p_dram_limit 10' 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:39.542 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:39.542 13:33:27 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 79d57100-adfd-46ab-83ce-12e421e62fd4 --l2p_dram_limit 10 -c nvc0n1p0 00:20:39.542 [2024-11-26 13:33:28.071834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.542 [2024-11-26 13:33:28.071907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.542 [2024-11-26 13:33:28.071929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.542 [2024-11-26 13:33:28.071938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.542 [2024-11-26 13:33:28.072021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.542 [2024-11-26 13:33:28.072033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.542 [2024-11-26 13:33:28.072044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:39.542 [2024-11-26 13:33:28.072053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.542 [2024-11-26 13:33:28.072081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.542 [2024-11-26 13:33:28.072958] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.543 [2024-11-26 13:33:28.072987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.072996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.543 [2024-11-26 13:33:28.073008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:20:39.543 [2024-11-26 13:33:28.073017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.073064] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:20:39.543 [2024-11-26 13:33:28.075083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.075129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:39.543 [2024-11-26 13:33:28.075142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:39.543 [2024-11-26 13:33:28.075154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.085379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 
13:33:28.085435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.543 [2024-11-26 13:33:28.085462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.166 ms 00:20:39.543 [2024-11-26 13:33:28.085473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.085644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.085659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.543 [2024-11-26 13:33:28.085669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:39.543 [2024-11-26 13:33:28.085683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.085754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.085767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.543 [2024-11-26 13:33:28.085777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:39.543 [2024-11-26 13:33:28.085788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.085813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.543 [2024-11-26 13:33:28.090475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.090518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.543 [2024-11-26 13:33:28.090533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:20:39.543 [2024-11-26 13:33:28.090542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.090586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.090595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.543 [2024-11-26 13:33:28.090607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:39.543 [2024-11-26 13:33:28.090615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.090657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:39.543 [2024-11-26 13:33:28.090808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.543 [2024-11-26 13:33:28.090833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.543 [2024-11-26 13:33:28.090845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.543 [2024-11-26 13:33:28.090860] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.543 [2024-11-26 13:33:28.090869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.543 [2024-11-26 13:33:28.090880] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:39.543 [2024-11-26 13:33:28.090888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.543 [2024-11-26 13:33:28.090914] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.543 [2024-11-26 13:33:28.090922] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.543 [2024-11-26 13:33:28.090933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.090949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.543 [2024-11-26 13:33:28.090960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:39.543 [2024-11-26 13:33:28.090968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.091057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.543 [2024-11-26 13:33:28.091067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.543 [2024-11-26 13:33:28.091077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:39.543 [2024-11-26 13:33:28.091084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.543 [2024-11-26 13:33:28.091197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.543 [2024-11-26 13:33:28.091207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.543 [2024-11-26 13:33:28.091218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.543 [2024-11-26 13:33:28.091246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.543 [2024-11-26 13:33:28.091272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.543 [2024-11-26 13:33:28.091288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.543 [2024-11-26 13:33:28.091295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:39.543 [2024-11-26 13:33:28.091303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.543 [2024-11-26 13:33:28.091311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.543 [2024-11-26 13:33:28.091320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:39.543 [2024-11-26 13:33:28.091327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.543 [2024-11-26 13:33:28.091347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.543 [2024-11-26 13:33:28.091375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.543 
[2024-11-26 13:33:28.091397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.543 [2024-11-26 13:33:28.091422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.543 [2024-11-26 13:33:28.091461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.543 [2024-11-26 13:33:28.091487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.543 [2024-11-26 13:33:28.091503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.543 [2024-11-26 13:33:28.091509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:39.543 [2024-11-26 13:33:28.091518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.543 [2024-11-26 13:33:28.091525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.543 [2024-11-26 13:33:28.091533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:39.543 [2024-11-26 13:33:28.091539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.543 [2024-11-26 13:33:28.091555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:39.543 [2024-11-26 13:33:28.091564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091570] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.543 [2024-11-26 13:33:28.091579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.543 [2024-11-26 13:33:28.091587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.543 [2024-11-26 13:33:28.091605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.543 [2024-11-26 13:33:28.091617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.543 [2024-11-26 13:33:28.091627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.543 [2024-11-26 13:33:28.091636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.543 [2024-11-26 13:33:28.091643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.543 [2024-11-26 13:33:28.091652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.543 [2024-11-26 13:33:28.091663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.543 [2024-11-26 
13:33:28.091679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.543 [2024-11-26 13:33:28.091689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:39.544 [2024-11-26 13:33:28.091699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:39.544 [2024-11-26 13:33:28.091706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:39.544 [2024-11-26 13:33:28.091715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:39.544 [2024-11-26 13:33:28.091722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:39.544 [2024-11-26 13:33:28.091731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:39.544 [2024-11-26 13:33:28.091739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:39.544 [2024-11-26 13:33:28.091748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:39.544 [2024-11-26 13:33:28.091755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:39.544 [2024-11-26 13:33:28.091766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:39.544 [2024-11-26 13:33:28.091808] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.544 [2024-11-26 13:33:28.091818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.544 [2024-11-26 13:33:28.091835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.544 [2024-11-26 13:33:28.091843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.544 [2024-11-26 13:33:28.091853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.544 [2024-11-26 13:33:28.091860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.544 [2024-11-26 13:33:28.091870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.544 [2024-11-26 13:33:28.091878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:20:39.544 [2024-11-26 13:33:28.091888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.544 [2024-11-26 13:33:28.091929] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:39.544 [2024-11-26 13:33:28.091944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:43.753 [2024-11-26 13:33:31.478071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.753 [2024-11-26 13:33:31.478134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:43.753 [2024-11-26 13:33:31.478148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3386.128 ms 00:20:43.753 [2024-11-26 13:33:31.478159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.753 [2024-11-26 13:33:31.504065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.753 [2024-11-26 13:33:31.504112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:43.753 [2024-11-26 13:33:31.504124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.698 ms 00:20:43.753 [2024-11-26 13:33:31.504133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.753 [2024-11-26 13:33:31.504275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.753 [2024-11-26 13:33:31.504287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:43.753 [2024-11-26 13:33:31.504295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:43.753 [2024-11-26 13:33:31.504308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.534961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.535000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.754 [2024-11-26 13:33:31.535012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.619 ms 00:20:43.754 [2024-11-26 13:33:31.535021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.535063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.535072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.754 [2024-11-26 13:33:31.535081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:43.754 [2024-11-26 13:33:31.535096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.535498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.535517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.754 [2024-11-26 13:33:31.535526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:43.754 [2024-11-26 13:33:31.535535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 
[2024-11-26 13:33:31.535656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.535666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.754 [2024-11-26 13:33:31.535677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:43.754 [2024-11-26 13:33:31.535688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.549671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.549706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.754 [2024-11-26 13:33:31.549716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.966 ms 00:20:43.754 [2024-11-26 13:33:31.549725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.561489] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:43.754 [2024-11-26 13:33:31.564700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.564733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.754 [2024-11-26 13:33:31.564748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.881 ms 00:20:43.754 [2024-11-26 13:33:31.564757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.654192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.654260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:43.754 [2024-11-26 13:33:31.654281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.385 ms 00:20:43.754 [2024-11-26 13:33:31.654290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.654520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.654535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.754 [2024-11-26 13:33:31.654550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:20:43.754 [2024-11-26 13:33:31.654558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.679844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.679908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:43.754 [2024-11-26 13:33:31.679924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.202 ms 00:20:43.754 [2024-11-26 13:33:31.679932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.704728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.704783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:43.754 [2024-11-26 13:33:31.704801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.722 ms 00:20:43.754 [2024-11-26 13:33:31.704808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.705402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.705425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.754 
[2024-11-26 13:33:31.705436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:20:43.754 [2024-11-26 13:33:31.705455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.780069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.780132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:43.754 [2024-11-26 13:33:31.780149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.558 ms 00:20:43.754 [2024-11-26 13:33:31.780158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.806410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.806478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:43.754 [2024-11-26 13:33:31.806493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.153 ms 00:20:43.754 [2024-11-26 13:33:31.806500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.832467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.832524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:43.754 [2024-11-26 13:33:31.832538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.923 ms 00:20:43.754 [2024-11-26 13:33:31.832546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.858147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.858195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:43.754 [2024-11-26 13:33:31.858210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.562 ms 00:20:43.754 [2024-11-26 13:33:31.858218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.858250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.858258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:43.754 [2024-11-26 13:33:31.858270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.754 [2024-11-26 13:33:31.858278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.858358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:31.858370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:43.754 [2024-11-26 13:33:31.858380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:43.754 [2024-11-26 13:33:31.858388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:31.859345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3787.103 ms, result 0 00:20:43.754 { 00:20:43.754 "name": "ftl0", 00:20:43.754 "uuid": "ae6ca256-4021-4c9d-91f8-3274fd083f2c" 00:20:43.754 } 00:20:43.754 13:33:31 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:43.754 13:33:31 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:43.754 13:33:32 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:43.754 13:33:32 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:43.754 [2024-11-26 13:33:32.274870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.274931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:43.754 [2024-11-26 13:33:32.274944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:43.754 [2024-11-26 13:33:32.274954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:32.274978] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:43.754 [2024-11-26 13:33:32.277615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.277648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:43.754 [2024-11-26 13:33:32.277660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms 00:20:43.754 [2024-11-26 13:33:32.277670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:32.277935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.277954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:43.754 [2024-11-26 13:33:32.277966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:43.754 [2024-11-26 13:33:32.277975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:32.281219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.281239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:43.754 [2024-11-26 13:33:32.281251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:20:43.754 [2024-11-26 13:33:32.281259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:32.287479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.287506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:43.754 [2024-11-26 13:33:32.287521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.199 ms 00:20:43.754 [2024-11-26 13:33:32.287529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.754 [2024-11-26 13:33:32.313011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.754 [2024-11-26 13:33:32.313057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:43.754 [2024-11-26 13:33:32.313072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.406 ms 00:20:43.754 [2024-11-26 13:33:32.313081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.330095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.330144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.017 [2024-11-26 13:33:32.330159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.966 ms 00:20:44.017 [2024-11-26 13:33:32.330167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.330333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.330344] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.017 [2024-11-26 13:33:32.330355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:44.017 [2024-11-26 13:33:32.330363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.355016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.355064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.017 [2024-11-26 13:33:32.355078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.630 ms 00:20:44.017 [2024-11-26 13:33:32.355085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.379454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.379499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.017 [2024-11-26 13:33:32.379515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.312 ms 00:20:44.017 [2024-11-26 13:33:32.379524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.402836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.402891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.017 [2024-11-26 13:33:32.402914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.261 ms 00:20:44.017 [2024-11-26 13:33:32.402922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.426090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.017 [2024-11-26 13:33:32.426130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.017 [2024-11-26 13:33:32.426144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.083 ms 00:20:44.017 [2024-11-26 13:33:32.426152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.017 [2024-11-26 13:33:32.426191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.017 [2024-11-26 13:33:32.426206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426293] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.017 [2024-11-26 13:33:32.426438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 
[2024-11-26 13:33:32.426516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:44.018 [2024-11-26 13:33:32.426738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.426999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.018 [2024-11-26 13:33:32.427115] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.018 [2024-11-26 13:33:32.427124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:20:44.018 [2024-11-26 13:33:32.427132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.018 [2024-11-26 13:33:32.427143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:44.018 [2024-11-26 13:33:32.427153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.018 [2024-11-26 13:33:32.427162] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.018 [2024-11-26 13:33:32.427170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.018 [2024-11-26 13:33:32.427179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.018 [2024-11-26 13:33:32.427186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.018 [2024-11-26 13:33:32.427194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.018 [2024-11-26 13:33:32.427200] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:44.018 [2024-11-26 13:33:32.427209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.018 [2024-11-26 13:33:32.427217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.018 [2024-11-26 13:33:32.427226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:20:44.018 [2024-11-26 13:33:32.427236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.018 [2024-11-26 13:33:32.439810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.018 [2024-11-26 13:33:32.439849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.018 [2024-11-26 13:33:32.439863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.537 ms 00:20:44.019 [2024-11-26 13:33:32.439871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.440237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.019 [2024-11-26 13:33:32.440257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.019 [2024-11-26 13:33:32.440272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:44.019 [2024-11-26 13:33:32.440279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.481939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.019 [2024-11-26 13:33:32.481986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.019 [2024-11-26 13:33:32.482000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.019 [2024-11-26 13:33:32.482009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.482080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.019 [2024-11-26 13:33:32.482089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.019 [2024-11-26 13:33:32.482103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.019 [2024-11-26 13:33:32.482112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.482204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.019 [2024-11-26 13:33:32.482215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.019 [2024-11-26 13:33:32.482227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.019 [2024-11-26 13:33:32.482237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.482259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.019 [2024-11-26 13:33:32.482268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.019 [2024-11-26 13:33:32.482280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.019 [2024-11-26 13:33:32.482292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.019 [2024-11-26 13:33:32.559850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.019 [2024-11-26 13:33:32.559896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.019 [2024-11-26 13:33:32.559911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:44.019 [2024-11-26 13:33:32.559919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.624831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.624889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.281 [2024-11-26 13:33:32.624904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.624916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.281 [2024-11-26 13:33:32.625023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.281 [2024-11-26 13:33:32.625144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.281 [2024-11-26 13:33:32.625269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.281 [2024-11-26 13:33:32.625334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.281 [2024-11-26 13:33:32.625406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.281 [2024-11-26 13:33:32.625479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.281 [2024-11-26 13:33:32.625491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.281 [2024-11-26 13:33:32.625501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.281 [2024-11-26 13:33:32.625509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.282 [2024-11-26 13:33:32.625640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.741 ms, result 0 00:20:44.282 true 00:20:44.282 13:33:32 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77219 
00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77219 ']' 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77219 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77219 00:20:44.282 killing process with pid 77219 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77219' 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77219 00:20:44.282 13:33:32 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77219 00:20:56.549 13:33:43 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:59.084 262144+0 records in 00:20:59.084 262144+0 records out 00:20:59.084 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.10592 s, 262 MB/s 00:20:59.084 13:33:47 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:01.631 13:33:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:01.631 [2024-11-26 13:33:49.794698] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:21:01.631 [2024-11-26 13:33:49.795319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77453 ] 00:21:01.631 [2024-11-26 13:33:49.945408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.631 [2024-11-26 13:33:50.028258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.894 [2024-11-26 13:33:50.238306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.894 [2024-11-26 13:33:50.238355] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:01.894 [2024-11-26 13:33:50.385250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.385289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:01.894 [2024-11-26 13:33:50.385300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.894 [2024-11-26 13:33:50.385306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.385339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.385348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.894 [2024-11-26 13:33:50.385355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:01.894 [2024-11-26 13:33:50.385361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.385373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:01.894 [2024-11-26 13:33:50.385888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:01.894 [2024-11-26 13:33:50.385904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.385910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.894 [2024-11-26 13:33:50.385917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:01.894 [2024-11-26 13:33:50.385922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.386874] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:01.894 [2024-11-26 13:33:50.396532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.396556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:01.894 [2024-11-26 13:33:50.396564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.658 ms 00:21:01.894 [2024-11-26 13:33:50.396570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.396612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.396620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:01.894 [2024-11-26 13:33:50.396626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:01.894 [2024-11-26 13:33:50.396632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.401067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.401087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.894 [2024-11-26 13:33:50.401095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.398 ms 00:21:01.894 [2024-11-26 13:33:50.401104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.401157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.401164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.894 [2024-11-26 13:33:50.401170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:01.894 [2024-11-26 13:33:50.401176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.401212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.401220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:01.894 [2024-11-26 13:33:50.401225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.894 [2024-11-26 13:33:50.401231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.401246] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:01.894 [2024-11-26 13:33:50.403896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.403916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.894 [2024-11-26 13:33:50.403925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:21:01.894 [2024-11-26 13:33:50.403930] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.403955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.894 [2024-11-26 13:33:50.403961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:01.894 [2024-11-26 13:33:50.403968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:01.894 [2024-11-26 13:33:50.403973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.894 [2024-11-26 13:33:50.403986] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:01.894 [2024-11-26 13:33:50.404001] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:01.894 [2024-11-26 13:33:50.404028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:01.895 [2024-11-26 13:33:50.404041] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:01.895 [2024-11-26 13:33:50.404121] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:01.895 [2024-11-26 13:33:50.404134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:01.895 [2024-11-26 13:33:50.404143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:01.895 [2024-11-26 13:33:50.404150] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404157] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:01.895 [2024-11-26 13:33:50.404168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:01.895 [2024-11-26 13:33:50.404174] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:01.895 [2024-11-26 13:33:50.404182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:01.895 [2024-11-26 13:33:50.404188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.895 [2024-11-26 13:33:50.404193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:01.895 [2024-11-26 13:33:50.404199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:21:01.895 [2024-11-26 13:33:50.404204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.895 [2024-11-26 13:33:50.404269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.895 [2024-11-26 13:33:50.404276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:01.895 [2024-11-26 13:33:50.404281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:01.895 [2024-11-26 13:33:50.404287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.895 [2024-11-26 13:33:50.404365] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:01.895 [2024-11-26 13:33:50.404373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:01.895 [2024-11-26 13:33:50.404379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:01.895 [2024-11-26 13:33:50.404385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:01.895 [2024-11-26 13:33:50.404397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:01.895 [2024-11-26 13:33:50.404413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.895 [2024-11-26 13:33:50.404425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:01.895 [2024-11-26 13:33:50.404430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:01.895 [2024-11-26 13:33:50.404435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:01.895 [2024-11-26 13:33:50.404454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:01.895 [2024-11-26 13:33:50.404459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:01.895 [2024-11-26 13:33:50.404464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:01.895 [2024-11-26 13:33:50.404475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:01.895 [2024-11-26 13:33:50.404493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:01.895 [2024-11-26 13:33:50.404509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:01.895 [2024-11-26 13:33:50.404525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:01.895 [2024-11-26 13:33:50.404540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:01.895 [2024-11-26 13:33:50.404556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.895 [2024-11-26 13:33:50.404566] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:01.895 [2024-11-26 13:33:50.404572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:01.895 [2024-11-26 13:33:50.404577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:01.895 [2024-11-26 13:33:50.404582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:01.895 [2024-11-26 13:33:50.404587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:01.895 [2024-11-26 13:33:50.404593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:01.895 [2024-11-26 13:33:50.404603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:01.895 [2024-11-26 13:33:50.404609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404614] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:01.895 [2024-11-26 13:33:50.404620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:01.895 [2024-11-26 13:33:50.404626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:01.895 [2024-11-26 13:33:50.404637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:01.895 [2024-11-26 13:33:50.404642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:01.895 [2024-11-26 13:33:50.404647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:01.895 [2024-11-26 13:33:50.404652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:01.895 [2024-11-26 13:33:50.404658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:01.895 [2024-11-26 13:33:50.404663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:01.895 [2024-11-26 13:33:50.404669] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:01.895 [2024-11-26 13:33:50.404676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:01.895 [2024-11-26 13:33:50.404690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:01.895 [2024-11-26 13:33:50.404695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:01.895 [2024-11-26 13:33:50.404701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:01.895 [2024-11-26 13:33:50.404706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:01.895 [2024-11-26 13:33:50.404712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:01.895 [2024-11-26 13:33:50.404717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:01.895 [2024-11-26 13:33:50.404723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:01.895 [2024-11-26 13:33:50.404728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:01.895 [2024-11-26 13:33:50.404734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:01.895 [2024-11-26 13:33:50.404762] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:01.895 [2024-11-26 13:33:50.404768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:01.895 [2024-11-26 13:33:50.404780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:01.895 [2024-11-26 13:33:50.404786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:01.895 [2024-11-26 13:33:50.404791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:01.895 [2024-11-26 13:33:50.404797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.895 [2024-11-26 13:33:50.404803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:01.895 [2024-11-26 13:33:50.404809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:21:01.895 [2024-11-26 13:33:50.404815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.895 [2024-11-26 13:33:50.426051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.896 [2024-11-26 13:33:50.426077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.896 [2024-11-26 13:33:50.426085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.205 ms 00:21:01.896 [2024-11-26 13:33:50.426093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.896 [2024-11-26 13:33:50.426155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.896 [2024-11-26 13:33:50.426161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:01.896 [2024-11-26 13:33:50.426167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.045 ms 00:21:01.896 [2024-11-26 13:33:50.426173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-11-26 13:33:50.465519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-11-26 13:33:50.465548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.157 [2024-11-26 13:33:50.465557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.303 ms 00:21:02.157 [2024-11-26 13:33:50.465564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.157 [2024-11-26 13:33:50.465597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.157 [2024-11-26 13:33:50.465604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.157 [2024-11-26 13:33:50.465613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:02.158 [2024-11-26 13:33:50.465619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.465935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.465954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.158 [2024-11-26 13:33:50.465962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:02.158 [2024-11-26 13:33:50.465968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.466068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.466075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.158 [2024-11-26 13:33:50.466082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:02.158 [2024-11-26 13:33:50.466091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.476545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.476567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.158 [2024-11-26 13:33:50.476576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.439 ms 00:21:02.158 [2024-11-26 13:33:50.476582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.486138] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:02.158 [2024-11-26 13:33:50.486163] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.158 [2024-11-26 13:33:50.486173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.486180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.158 [2024-11-26 13:33:50.486187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.520 ms 00:21:02.158 [2024-11-26 13:33:50.486193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.504760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.504799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.158 [2024-11-26 13:33:50.504808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:21:02.158 [2024-11-26 13:33:50.504814] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.513643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.513666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.158 [2024-11-26 13:33:50.513673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.801 ms 00:21:02.158 [2024-11-26 13:33:50.513679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.522407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.522429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.158 [2024-11-26 13:33:50.522436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.703 ms 00:21:02.158 [2024-11-26 13:33:50.522451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.522918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.522935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.158 [2024-11-26 13:33:50.522948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:21:02.158 [2024-11-26 13:33:50.522956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.565952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.565990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.158 [2024-11-26 13:33:50.566000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.984 ms 00:21:02.158 [2024-11-26 13:33:50.566010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.573727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.158 [2024-11-26 13:33:50.575555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.575576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.158 [2024-11-26 13:33:50.575584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.509 ms 00:21:02.158 [2024-11-26 13:33:50.575591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.575652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.575661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.158 [2024-11-26 13:33:50.575669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:02.158 [2024-11-26 13:33:50.575675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.575720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.575727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.158 [2024-11-26 13:33:50.575735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:02.158 [2024-11-26 13:33:50.575742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.575758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.575765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:21:02.158 [2024-11-26 13:33:50.575772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:02.158 [2024-11-26 13:33:50.575778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.575803] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.158 [2024-11-26 13:33:50.575812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.575819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.158 [2024-11-26 13:33:50.575825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:02.158 [2024-11-26 13:33:50.575832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.593157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.593181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.158 [2024-11-26 13:33:50.593190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.311 ms 00:21:02.158 [2024-11-26 13:33:50.593197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.593253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.158 [2024-11-26 13:33:50.593261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.158 [2024-11-26 13:33:50.593267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:02.158 [2024-11-26 13:33:50.593273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.158 [2024-11-26 13:33:50.594294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 208.695 ms, result 0 00:21:03.102  [2024-11-26T13:33:52.616Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-26T13:33:54.019Z] Copying: 40/1024 [MB] (19 MBps) [2024-11-26T13:33:54.960Z] Copying: 56/1024 [MB] (16 MBps) [2024-11-26T13:33:55.903Z] Copying: 71/1024 [MB] (15 MBps) [2024-11-26T13:33:56.843Z] Copying: 92/1024 [MB] (20 MBps) [2024-11-26T13:33:57.785Z] Copying: 106/1024 [MB] (14 MBps) [2024-11-26T13:33:58.729Z] Copying: 117/1024 [MB] (10 MBps) [2024-11-26T13:33:59.673Z] Copying: 130360/1048576 [kB] (9888 kBps) [2024-11-26T13:34:00.616Z] Copying: 140364/1048576 [kB] (10004 kBps) [2024-11-26T13:34:02.003Z] Copying: 149904/1048576 [kB] (9540 kBps) [2024-11-26T13:34:02.947Z] Copying: 159880/1048576 [kB] (9976 kBps) [2024-11-26T13:34:03.887Z] Copying: 170000/1048576 [kB] (10120 kBps) [2024-11-26T13:34:04.827Z] Copying: 176/1024 [MB] (10 MBps) [2024-11-26T13:34:05.816Z] Copying: 190928/1048576 [kB] (9984 kBps) [2024-11-26T13:34:06.773Z] Copying: 201064/1048576 [kB] (10136 kBps) [2024-11-26T13:34:07.718Z] Copying: 210768/1048576 [kB] (9704 kBps) [2024-11-26T13:34:08.661Z] Copying: 220288/1048576 [kB] (9520 kBps) [2024-11-26T13:34:10.049Z] Copying: 229736/1048576 [kB] (9448 kBps) [2024-11-26T13:34:10.622Z] Copying: 238528/1048576 [kB] (8792 kBps) [2024-11-26T13:34:12.008Z] Copying: 248128/1048576 [kB] (9600 kBps) [2024-11-26T13:34:12.949Z] Copying: 257464/1048576 [kB] (9336 kBps) [2024-11-26T13:34:13.894Z] Copying: 266720/1048576 [kB] (9256 kBps) [2024-11-26T13:34:14.832Z] Copying: 276492/1048576 [kB] (9772 kBps) [2024-11-26T13:34:15.769Z] Copying: 286540/1048576 [kB] (10048 kBps) [2024-11-26T13:34:16.704Z] Copying: 291/1024 [MB] (11 MBps) 
[2024-11-26T13:34:17.641Z] Copying: 303/1024 [MB] (12 MBps) [2024-11-26T13:34:19.020Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-26T13:34:19.964Z] Copying: 324/1024 [MB] (10 MBps) [2024-11-26T13:34:20.905Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-26T13:34:21.840Z] Copying: 353224/1048576 [kB] (9900 kBps) [2024-11-26T13:34:22.775Z] Copying: 356/1024 [MB] (12 MBps) [2024-11-26T13:34:23.708Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-26T13:34:24.641Z] Copying: 380/1024 [MB] (11 MBps) [2024-11-26T13:34:26.015Z] Copying: 392/1024 [MB] (11 MBps) [2024-11-26T13:34:26.951Z] Copying: 404/1024 [MB] (12 MBps) [2024-11-26T13:34:27.886Z] Copying: 416/1024 [MB] (12 MBps) [2024-11-26T13:34:28.821Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-26T13:34:29.755Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-26T13:34:30.689Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-26T13:34:31.624Z] Copying: 461/1024 [MB] (10 MBps) [2024-11-26T13:34:32.998Z] Copying: 471/1024 [MB] (10 MBps) [2024-11-26T13:34:33.932Z] Copying: 482/1024 [MB] (10 MBps) [2024-11-26T13:34:34.896Z] Copying: 494/1024 [MB] (12 MBps) [2024-11-26T13:34:35.828Z] Copying: 512/1024 [MB] (17 MBps) [2024-11-26T13:34:36.809Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-26T13:34:37.746Z] Copying: 534/1024 [MB] (11 MBps) [2024-11-26T13:34:38.681Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-26T13:34:39.616Z] Copying: 560/1024 [MB] (13 MBps) [2024-11-26T13:34:40.992Z] Copying: 573/1024 [MB] (12 MBps) [2024-11-26T13:34:41.926Z] Copying: 585/1024 [MB] (12 MBps) [2024-11-26T13:34:42.861Z] Copying: 597/1024 [MB] (11 MBps) [2024-11-26T13:34:43.796Z] Copying: 614/1024 [MB] (16 MBps) [2024-11-26T13:34:44.732Z] Copying: 631/1024 [MB] (17 MBps) [2024-11-26T13:34:45.666Z] Copying: 650/1024 [MB] (18 MBps) [2024-11-26T13:34:47.042Z] Copying: 666/1024 [MB] (16 MBps) [2024-11-26T13:34:47.978Z] Copying: 681/1024 [MB] (15 MBps) [2024-11-26T13:34:48.915Z] Copying: 695/1024 [MB] (14 MBps) [2024-11-26T13:34:49.848Z] Copying: 740/1024 [MB] (45 MBps) [2024-11-26T13:34:50.783Z] Copying: 785/1024 [MB] (44 MBps) [2024-11-26T13:34:51.718Z] Copying: 829/1024 [MB] (43 MBps) [2024-11-26T13:34:52.652Z] Copying: 874/1024 [MB] (45 MBps) [2024-11-26T13:34:54.025Z] Copying: 918/1024 [MB] (43 MBps) [2024-11-26T13:34:54.960Z] Copying: 964/1024 [MB] (45 MBps) [2024-11-26T13:34:54.960Z] Copying: 1011/1024 [MB] (47 MBps) [2024-11-26T13:34:54.960Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-26 13:34:54.880272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.390 [2024-11-26 13:34:54.880330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:06.390 [2024-11-26 13:34:54.880345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.390 [2024-11-26 13:34:54.880354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.390 [2024-11-26 13:34:54.880376] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.390 [2024-11-26 13:34:54.883011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.390 [2024-11-26 13:34:54.883054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:06.390 [2024-11-26 13:34:54.883068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:22:06.390 [2024-11-26 13:34:54.883102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.390 [2024-11-26 13:34:54.884602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:06.390 [2024-11-26 13:34:54.884644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:06.390 [2024-11-26 13:34:54.884655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:22:06.390 [2024-11-26 13:34:54.884663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.390 [2024-11-26 13:34:54.897521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.390 [2024-11-26 13:34:54.897582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:06.391 [2024-11-26 13:34:54.897595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.839 ms 00:22:06.391 [2024-11-26 13:34:54.897603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.391 [2024-11-26 13:34:54.903772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.391 [2024-11-26 13:34:54.903823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:06.391 [2024-11-26 13:34:54.903835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:22:06.391 [2024-11-26 13:34:54.903844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.391 [2024-11-26 13:34:54.929743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.391 [2024-11-26 13:34:54.929811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:06.391 [2024-11-26 13:34:54.929827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:22:06.391 [2024-11-26 13:34:54.929836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.391 [2024-11-26 13:34:54.945228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.391 [2024-11-26 13:34:54.945289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:06.391 [2024-11-26 13:34:54.945303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.336 ms 00:22:06.391 [2024-11-26 13:34:54.945312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.391 [2024-11-26 13:34:54.945493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.391 [2024-11-26 13:34:54.945514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:06.391 [2024-11-26 13:34:54.945522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:22:06.391 [2024-11-26 13:34:54.945530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.651 [2024-11-26 13:34:54.971075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.651 [2024-11-26 13:34:54.971140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:06.651 [2024-11-26 13:34:54.971154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.528 ms 00:22:06.651 [2024-11-26 13:34:54.971162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.651 [2024-11-26 13:34:54.995495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.651 [2024-11-26 13:34:54.995554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:06.651 [2024-11-26 13:34:54.995566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.278 ms 00:22:06.651 [2024-11-26 13:34:54.995574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.651 [2024-11-26 
13:34:55.020295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.651 [2024-11-26 13:34:55.020353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:06.651 [2024-11-26 13:34:55.020365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.667 ms 00:22:06.651 [2024-11-26 13:34:55.020373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.651 [2024-11-26 13:34:55.045217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.651 [2024-11-26 13:34:55.045274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:06.651 [2024-11-26 13:34:55.045286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.727 ms 00:22:06.651 [2024-11-26 13:34:55.045295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.651 [2024-11-26 13:34:55.045356] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:06.651 [2024-11-26 13:34:55.045373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: 
free 00:22:06.651 [2024-11-26 13:34:55.045543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 
261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:06.651 [2024-11-26 13:34:55.045890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.045995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046105] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:06.652 [2024-11-26 13:34:55.046173] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:06.652 [2024-11-26 13:34:55.046181] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:22:06.652 [2024-11-26 13:34:55.046192] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:06.652 [2024-11-26 13:34:55.046199] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:06.652 [2024-11-26 13:34:55.046206] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:06.652 [2024-11-26 13:34:55.046213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:06.652 [2024-11-26 13:34:55.046220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:06.652 [2024-11-26 13:34:55.046235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:06.652 [2024-11-26 13:34:55.046242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:06.652 [2024-11-26 13:34:55.046248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:06.652 [2024-11-26 13:34:55.046254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:06.652 [2024-11-26 13:34:55.046262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.652 [2024-11-26 13:34:55.046269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:06.652 [2024-11-26 13:34:55.046277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:22:06.652 [2024-11-26 13:34:55.046285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.058807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.652 [2024-11-26 13:34:55.058864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:06.652 [2024-11-26 13:34:55.058877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:22:06.652 [2024-11-26 13:34:55.058885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.059275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.652 [2024-11-26 13:34:55.059293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:06.652 [2024-11-26 13:34:55.059302] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:06.652 [2024-11-26 13:34:55.059317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.092686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.652 [2024-11-26 13:34:55.092748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.652 [2024-11-26 13:34:55.092759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.652 [2024-11-26 13:34:55.092767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.092834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.652 [2024-11-26 13:34:55.092843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.652 [2024-11-26 13:34:55.092851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.652 [2024-11-26 13:34:55.092863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.092936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.652 [2024-11-26 13:34:55.092946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.652 [2024-11-26 13:34:55.092953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.652 [2024-11-26 13:34:55.092961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.092976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.652 [2024-11-26 13:34:55.092984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.652 [2024-11-26 13:34:55.092992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.652 [2024-11-26 13:34:55.092999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.652 [2024-11-26 13:34:55.171227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.652 [2024-11-26 13:34:55.171291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.652 [2024-11-26 13:34:55.171304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.652 [2024-11-26 13:34:55.171312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.235655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.235713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.911 [2024-11-26 13:34:55.235725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.235732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.235816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.235826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.911 [2024-11-26 13:34:55.235834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.235841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.235873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.235882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:22:06.911 [2024-11-26 13:34:55.235890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.235897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.235984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.235996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.911 [2024-11-26 13:34:55.236004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.236011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.236043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.236052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:06.911 [2024-11-26 13:34:55.236060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.236067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.236101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.236127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.911 [2024-11-26 13:34:55.236134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.236141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.236180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.911 [2024-11-26 13:34:55.236189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.911 [2024-11-26 13:34:55.236197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.911 [2024-11-26 13:34:55.236204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.911 [2024-11-26 13:34:55.236316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.016 ms, result 0 00:22:08.285 00:22:08.285 00:22:08.285 13:34:56 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:08.285 [2024-11-26 13:34:56.580857] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:22:08.286 [2024-11-26 13:34:56.580994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78135 ] 00:22:08.286 [2024-11-26 13:34:56.744756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.286 [2024-11-26 13:34:56.848783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.544 [2024-11-26 13:34:57.107721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.544 [2024-11-26 13:34:57.107790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.803 [2024-11-26 13:34:57.261409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.261482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:08.803 [2024-11-26 13:34:57.261496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.803 [2024-11-26 13:34:57.261504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.261558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.261570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.803 [2024-11-26 13:34:57.261578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:08.803 [2024-11-26 13:34:57.261585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.261605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:08.803 [2024-11-26 13:34:57.262353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:08.803 [2024-11-26 13:34:57.262368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.262376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.803 [2024-11-26 13:34:57.262384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:22:08.803 [2024-11-26 13:34:57.262392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.263547] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:08.803 [2024-11-26 13:34:57.275952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.276000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:08.803 [2024-11-26 13:34:57.276013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.404 ms 00:22:08.803 [2024-11-26 13:34:57.276021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.276108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.276118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:08.803 [2024-11-26 13:34:57.276127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:08.803 [2024-11-26 13:34:57.276134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.281749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:08.803 [2024-11-26 13:34:57.281790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.803 [2024-11-26 13:34:57.281800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.533 ms 00:22:08.803 [2024-11-26 13:34:57.281812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.281893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.281903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.803 [2024-11-26 13:34:57.281911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:08.803 [2024-11-26 13:34:57.281919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.281974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.281984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:08.803 [2024-11-26 13:34:57.281992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:08.803 [2024-11-26 13:34:57.281999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.282024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.803 [2024-11-26 13:34:57.285642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.285674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.803 [2024-11-26 13:34:57.285686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.624 ms 00:22:08.803 [2024-11-26 13:34:57.285694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.285728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.285737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:08.803 [2024-11-26 13:34:57.285746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:08.803 [2024-11-26 13:34:57.285753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.285774] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:08.803 [2024-11-26 13:34:57.285793] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:08.803 [2024-11-26 13:34:57.285827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:08.803 [2024-11-26 13:34:57.285844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:08.803 [2024-11-26 13:34:57.285946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:08.803 [2024-11-26 13:34:57.285961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:08.803 [2024-11-26 13:34:57.285972] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:08.803 [2024-11-26 13:34:57.285982] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:08.803 [2024-11-26 13:34:57.285991] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:08.803 [2024-11-26 13:34:57.285999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:08.803 [2024-11-26 13:34:57.286006] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:08.803 [2024-11-26 13:34:57.286013] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:08.803 [2024-11-26 13:34:57.286024] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:08.803 [2024-11-26 13:34:57.286031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.286039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:08.803 [2024-11-26 13:34:57.286046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:22:08.803 [2024-11-26 13:34:57.286053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.286135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.803 [2024-11-26 13:34:57.286144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:08.803 [2024-11-26 13:34:57.286151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:08.803 [2024-11-26 13:34:57.286158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.803 [2024-11-26 13:34:57.286293] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:08.803 [2024-11-26 13:34:57.286305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:08.803 [2024-11-26 13:34:57.286313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.803 [2024-11-26 13:34:57.286321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.803 [2024-11-26 13:34:57.286328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:08.803 [2024-11-26 13:34:57.286335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:08.803 [2024-11-26 13:34:57.286342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:08.803 [2024-11-26 13:34:57.286349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:08.803 [2024-11-26 13:34:57.286356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:08.803 [2024-11-26 13:34:57.286362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.803 [2024-11-26 13:34:57.286369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:08.803 [2024-11-26 13:34:57.286375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:08.803 [2024-11-26 13:34:57.286384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.803 [2024-11-26 13:34:57.286397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:08.804 [2024-11-26 13:34:57.286404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:08.804 [2024-11-26 13:34:57.286410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:08.804 [2024-11-26 13:34:57.286423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286429] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:08.804 [2024-11-26 13:34:57.286456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:08.804 [2024-11-26 13:34:57.286477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:08.804 [2024-11-26 13:34:57.286496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:08.804 [2024-11-26 13:34:57.286516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:08.804 [2024-11-26 13:34:57.286535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.804 [2024-11-26 13:34:57.286548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:08.804 [2024-11-26 13:34:57.286555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:08.804 [2024-11-26 13:34:57.286561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.804 [2024-11-26 13:34:57.286568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:08.804 [2024-11-26 13:34:57.286574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:08.804 [2024-11-26 13:34:57.286580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:08.804 [2024-11-26 13:34:57.286593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:08.804 [2024-11-26 13:34:57.286599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286606] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:08.804 [2024-11-26 13:34:57.286614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:08.804 [2024-11-26 13:34:57.286621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.804 [2024-11-26 13:34:57.286636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:08.804 [2024-11-26 13:34:57.286642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:08.804 [2024-11-26 13:34:57.286649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:08.804 
[2024-11-26 13:34:57.286656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:08.804 [2024-11-26 13:34:57.286662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:08.804 [2024-11-26 13:34:57.286668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:08.804 [2024-11-26 13:34:57.286676] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:08.804 [2024-11-26 13:34:57.286686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:08.804 [2024-11-26 13:34:57.286704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:08.804 [2024-11-26 13:34:57.286710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:08.804 [2024-11-26 13:34:57.286718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:08.804 [2024-11-26 13:34:57.286725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:08.804 [2024-11-26 13:34:57.286731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:08.804 [2024-11-26 13:34:57.286738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:08.804 [2024-11-26 13:34:57.286745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:08.804 [2024-11-26 13:34:57.286752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:08.804 [2024-11-26 13:34:57.286759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:08.804 [2024-11-26 13:34:57.286793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:08.804 [2024-11-26 13:34:57.286801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:08.804 [2024-11-26 13:34:57.286816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:08.804 [2024-11-26 13:34:57.286823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:08.804 [2024-11-26 13:34:57.286830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:08.804 [2024-11-26 13:34:57.286837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.286845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:08.804 [2024-11-26 13:34:57.286852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:22:08.804 [2024-11-26 13:34:57.286859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.313512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.313556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.804 [2024-11-26 13:34:57.313568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.610 ms 00:22:08.804 [2024-11-26 13:34:57.313578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.313674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.313682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.804 [2024-11-26 13:34:57.313691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:08.804 [2024-11-26 13:34:57.313698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.357884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.357936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.804 [2024-11-26 13:34:57.357950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.115 ms 00:22:08.804 [2024-11-26 13:34:57.357958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.358017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.358028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.804 [2024-11-26 13:34:57.358041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:08.804 [2024-11-26 13:34:57.358048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.358478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.358501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.804 [2024-11-26 13:34:57.358512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:22:08.804 [2024-11-26 13:34:57.358519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.804 [2024-11-26 13:34:57.358651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.804 [2024-11-26 13:34:57.358660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.804 [2024-11-26 13:34:57.358674] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:08.804 [2024-11-26 13:34:57.358681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.062 [2024-11-26 13:34:57.372174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.062 [2024-11-26 13:34:57.372223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.062 [2024-11-26 13:34:57.372238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.473 ms 00:22:09.062 [2024-11-26 13:34:57.372246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.062 [2024-11-26 13:34:57.385191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:09.063 [2024-11-26 13:34:57.385261] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:09.063 [2024-11-26 13:34:57.385275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.385285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:09.063 [2024-11-26 13:34:57.385296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.904 ms 00:22:09.063 [2024-11-26 13:34:57.385304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.410651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.410722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:09.063 [2024-11-26 13:34:57.410736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:22:09.063 [2024-11-26 13:34:57.410744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.423043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.423104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:09.063 [2024-11-26 13:34:57.423117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.199 ms 00:22:09.063 [2024-11-26 13:34:57.423124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.434976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.435026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:09.063 [2024-11-26 13:34:57.435038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.792 ms 00:22:09.063 [2024-11-26 13:34:57.435046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.435731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.435752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:09.063 [2024-11-26 13:34:57.435765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:22:09.063 [2024-11-26 13:34:57.435773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.493515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.493574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:09.063 [2024-11-26 13:34:57.493596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.723 ms 00:22:09.063 [2024-11-26 13:34:57.493604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.504817] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:09.063 [2024-11-26 13:34:57.507676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.507714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:09.063 [2024-11-26 13:34:57.507727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.006 ms 00:22:09.063 [2024-11-26 13:34:57.507737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.507852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.507863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:09.063 [2024-11-26 13:34:57.507872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:09.063 [2024-11-26 13:34:57.507882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.507948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.507958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:09.063 [2024-11-26 13:34:57.507967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:09.063 [2024-11-26 13:34:57.507974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.507993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.508001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:09.063 [2024-11-26 13:34:57.508009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.063 [2024-11-26 13:34:57.508016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.508047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:09.063 [2024-11-26 13:34:57.508057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.508064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:09.063 [2024-11-26 13:34:57.508072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:09.063 [2024-11-26 13:34:57.508080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.532765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.532815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:09.063 [2024-11-26 13:34:57.532828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.668 ms 00:22:09.063 [2024-11-26 13:34:57.532841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.063 [2024-11-26 13:34:57.532937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.063 [2024-11-26 13:34:57.532946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:09.063 [2024-11-26 13:34:57.532956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:09.063 [2024-11-26 13:34:57.532963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:09.063 [2024-11-26 13:34:57.533951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.129 ms, result 0 00:22:10.436  [2024-11-26T13:34:59.968Z] Copying: 46/1024 [MB] (46 MBps) [2024-11-26T13:35:00.904Z] Copying: 91/1024 [MB] (45 MBps) [2024-11-26T13:35:01.840Z] Copying: 136/1024 [MB] (44 MBps) [2024-11-26T13:35:02.775Z] Copying: 182/1024 [MB] (45 MBps) [2024-11-26T13:35:04.148Z] Copying: 227/1024 [MB] (45 MBps) [2024-11-26T13:35:04.717Z] Copying: 276/1024 [MB] (48 MBps) [2024-11-26T13:35:06.095Z] Copying: 318/1024 [MB] (42 MBps) [2024-11-26T13:35:07.029Z] Copying: 358/1024 [MB] (39 MBps) [2024-11-26T13:35:07.965Z] Copying: 407/1024 [MB] (48 MBps) [2024-11-26T13:35:08.901Z] Copying: 452/1024 [MB] (44 MBps) [2024-11-26T13:35:09.840Z] Copying: 492/1024 [MB] (40 MBps) [2024-11-26T13:35:10.774Z] Copying: 525/1024 [MB] (33 MBps) [2024-11-26T13:35:12.151Z] Copying: 572/1024 [MB] (46 MBps) [2024-11-26T13:35:12.718Z] Copying: 620/1024 [MB] (48 MBps) [2024-11-26T13:35:14.113Z] Copying: 668/1024 [MB] (47 MBps) [2024-11-26T13:35:14.755Z] Copying: 714/1024 [MB] (46 MBps) [2024-11-26T13:35:16.136Z] Copying: 762/1024 [MB] (47 MBps) [2024-11-26T13:35:17.076Z] Copying: 795/1024 [MB] (32 MBps) [2024-11-26T13:35:18.014Z] Copying: 830/1024 [MB] (34 MBps) [2024-11-26T13:35:18.954Z] Copying: 861/1024 [MB] (31 MBps) [2024-11-26T13:35:19.892Z] Copying: 898/1024 [MB] (37 MBps) [2024-11-26T13:35:20.829Z] Copying: 927/1024 [MB] (28 MBps) [2024-11-26T13:35:21.767Z] Copying: 965/1024 [MB] (38 MBps) [2024-11-26T13:35:22.026Z] Copying: 1006/1024 [MB] (41 MBps) [2024-11-26T13:35:23.411Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-11-26 13:35:23.207761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.207834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.841 [2024-11-26 13:35:23.207848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.841 [2024-11-26 13:35:23.207856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.207878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.841 [2024-11-26 13:35:23.210506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.210550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.841 [2024-11-26 13:35:23.210569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:22:34.841 [2024-11-26 13:35:23.210578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.210807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.210823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:34.841 [2024-11-26 13:35:23.210832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:22:34.841 [2024-11-26 13:35:23.210839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.214284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.214317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:34.841 [2024-11-26 13:35:23.214327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:22:34.841 [2024-11-26 
13:35:23.214341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.221408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.221463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:34.841 [2024-11-26 13:35:23.221475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:22:34.841 [2024-11-26 13:35:23.221483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.247939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.248004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:34.841 [2024-11-26 13:35:23.248016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.375 ms 00:22:34.841 [2024-11-26 13:35:23.248025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.263076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.263150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:34.841 [2024-11-26 13:35:23.263165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.016 ms 00:22:34.841 [2024-11-26 13:35:23.263174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.263363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.263374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.841 [2024-11-26 13:35:23.263383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:22:34.841 [2024-11-26 13:35:23.263390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.291492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.291551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:34.841 [2024-11-26 13:35:23.291564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.086 ms 00:22:34.841 [2024-11-26 13:35:23.291572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.318066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.318128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:34.841 [2024-11-26 13:35:23.318143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.461 ms 00:22:34.841 [2024-11-26 13:35:23.318152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.342939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.841 [2024-11-26 13:35:23.342998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.841 [2024-11-26 13:35:23.343012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.750 ms 00:22:34.841 [2024-11-26 13:35:23.343019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.841 [2024-11-26 13:35:23.367668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.842 [2024-11-26 13:35:23.367720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.842 [2024-11-26 13:35:23.367732] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.584 ms 00:22:34.842 [2024-11-26 13:35:23.367741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.842 [2024-11-26 13:35:23.367773] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.842 [2024-11-26 13:35:23.367797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:22:34.842 [2024-11-26 13:35:23.367972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.367994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:34.842 [2024-11-26 13:35:23.368456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368541] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:34.843 [2024-11-26 13:35:23.368572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:34.843 [2024-11-26 13:35:23.368583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:22:34.843 [2024-11-26 13:35:23.368591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:34.843 [2024-11-26 13:35:23.368598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:34.843 [2024-11-26 13:35:23.368606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:34.843 [2024-11-26 13:35:23.368614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:34.843 [2024-11-26 13:35:23.368628] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:34.843 [2024-11-26 13:35:23.368636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:34.843 [2024-11-26 13:35:23.368644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:34.843 [2024-11-26 13:35:23.368651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:34.843 [2024-11-26 13:35:23.368657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:34.843 [2024-11-26 13:35:23.368664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.843 [2024-11-26 13:35:23.368671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:34.843 [2024-11-26 13:35:23.368680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:22:34.843 [2024-11-26 13:35:23.368687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.843 [2024-11-26 13:35:23.381392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.843 [2024-11-26 13:35:23.381453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:34.843 [2024-11-26 13:35:23.381467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:22:34.843 [2024-11-26 13:35:23.381477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.843 [2024-11-26 13:35:23.381850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.843 [2024-11-26 13:35:23.381867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:34.843 [2024-11-26 13:35:23.381880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:22:34.843 [2024-11-26 13:35:23.381887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.414969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.415023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.105 [2024-11-26 13:35:23.415034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.415041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.415108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:35.105 [2024-11-26 13:35:23.415116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.105 [2024-11-26 13:35:23.415129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.415154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.415239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.415250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.105 [2024-11-26 13:35:23.415258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.415265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.415280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.415288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:35.105 [2024-11-26 13:35:23.415296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.415306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.494228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.494289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.105 [2024-11-26 13:35:23.494302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.494309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:35.105 [2024-11-26 13:35:23.559438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.105 [2024-11-26 13:35:23.559538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.105 [2024-11-26 13:35:23.559608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.105 [2024-11-26 13:35:23.559788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 
13:35:23.559823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:35.105 [2024-11-26 13:35:23.559839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.105 [2024-11-26 13:35:23.559899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.559945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.105 [2024-11-26 13:35:23.559954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.105 [2024-11-26 13:35:23.559962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.105 [2024-11-26 13:35:23.559970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.105 [2024-11-26 13:35:23.560088] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.295 ms, result 0 00:22:36.049 00:22:36.049 00:22:36.049 13:35:24 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:37.962 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:37.962 13:35:26 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:37.962 [2024-11-26 13:35:26.396429] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:22:37.962 [2024-11-26 13:35:26.396596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78445 ] 00:22:38.224 [2024-11-26 13:35:26.558797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.224 [2024-11-26 13:35:26.676994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.485 [2024-11-26 13:35:26.960565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.485 [2024-11-26 13:35:26.960633] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.748 [2024-11-26 13:35:27.115597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.115653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:38.748 [2024-11-26 13:35:27.115668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:38.748 [2024-11-26 13:35:27.115676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.115721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.115733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.748 [2024-11-26 13:35:27.115741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:38.748 [2024-11-26 13:35:27.115749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.115769] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:38.748 [2024-11-26 13:35:27.116497] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:38.748 [2024-11-26 13:35:27.116513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.116521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.748 [2024-11-26 13:35:27.116529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:22:38.748 [2024-11-26 13:35:27.116537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.117903] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:38.748 [2024-11-26 13:35:27.131798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.131836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:38.748 [2024-11-26 13:35:27.131849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.897 ms 00:22:38.748 [2024-11-26 13:35:27.131858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.131917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.131927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:38.748 [2024-11-26 13:35:27.131936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:38.748 [2024-11-26 13:35:27.131943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.138695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:38.748 [2024-11-26 13:35:27.138729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.748 [2024-11-26 13:35:27.138739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.687 ms 00:22:38.748 [2024-11-26 13:35:27.138752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.138828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.138838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.748 [2024-11-26 13:35:27.138847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:38.748 [2024-11-26 13:35:27.138855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.138891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.138901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:38.748 [2024-11-26 13:35:27.138909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:38.748 [2024-11-26 13:35:27.138917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.138943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:38.748 [2024-11-26 13:35:27.142673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.142703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.748 [2024-11-26 13:35:27.142716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:22:38.748 [2024-11-26 13:35:27.142724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.142754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.748 [2024-11-26 13:35:27.142763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:38.748 [2024-11-26 13:35:27.142772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:38.748 [2024-11-26 13:35:27.142779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.748 [2024-11-26 13:35:27.142811] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:38.748 [2024-11-26 13:35:27.142832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:38.748 [2024-11-26 13:35:27.142867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:38.748 [2024-11-26 13:35:27.142886] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:38.748 [2024-11-26 13:35:27.142989] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:38.748 [2024-11-26 13:35:27.143000] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:38.748 [2024-11-26 13:35:27.143012] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:38.749 [2024-11-26 13:35:27.143022] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143032] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143041] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:38.749 [2024-11-26 13:35:27.143048] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:38.749 [2024-11-26 13:35:27.143055] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:38.749 [2024-11-26 13:35:27.143066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:38.749 [2024-11-26 13:35:27.143075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.749 [2024-11-26 13:35:27.143083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:38.749 [2024-11-26 13:35:27.143090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:38.749 [2024-11-26 13:35:27.143098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.749 [2024-11-26 13:35:27.143200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.749 [2024-11-26 13:35:27.143211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:38.749 [2024-11-26 13:35:27.143219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:38.749 [2024-11-26 13:35:27.143226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.749 [2024-11-26 13:35:27.143331] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:38.749 [2024-11-26 13:35:27.143342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:38.749 [2024-11-26 13:35:27.143350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:38.749 [2024-11-26 13:35:27.143375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:38.749 [2024-11-26 13:35:27.143397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.749 [2024-11-26 13:35:27.143411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:38.749 [2024-11-26 13:35:27.143418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:38.749 [2024-11-26 13:35:27.143426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.749 [2024-11-26 13:35:27.143438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:38.749 [2024-11-26 13:35:27.143458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:38.749 [2024-11-26 13:35:27.143464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:38.749 [2024-11-26 13:35:27.143478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143486] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:38.749 [2024-11-26 13:35:27.143519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:38.749 [2024-11-26 13:35:27.143540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:38.749 [2024-11-26 13:35:27.143566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:38.749 [2024-11-26 13:35:27.143587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:38.749 [2024-11-26 13:35:27.143607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.749 [2024-11-26 13:35:27.143623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:38.749 [2024-11-26 13:35:27.143634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:38.749 [2024-11-26 13:35:27.143644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.749 [2024-11-26 13:35:27.143655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:38.749 [2024-11-26 13:35:27.143665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:38.749 [2024-11-26 13:35:27.143672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:38.749 [2024-11-26 13:35:27.143686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:38.749 [2024-11-26 13:35:27.143693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143699] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:38.749 [2024-11-26 13:35:27.143707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:38.749 [2024-11-26 13:35:27.143715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.749 [2024-11-26 13:35:27.143730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:38.749 [2024-11-26 13:35:27.143736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:38.749 [2024-11-26 13:35:27.143743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:38.749 
[2024-11-26 13:35:27.143752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:38.749 [2024-11-26 13:35:27.143759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:38.749 [2024-11-26 13:35:27.143767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:38.749 [2024-11-26 13:35:27.143775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:38.749 [2024-11-26 13:35:27.143784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.749 [2024-11-26 13:35:27.143795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:38.749 [2024-11-26 13:35:27.143802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:38.749 [2024-11-26 13:35:27.143810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:38.749 [2024-11-26 13:35:27.143817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:38.749 [2024-11-26 13:35:27.143825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:38.749 [2024-11-26 13:35:27.143831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:38.749 [2024-11-26 13:35:27.143838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:38.749 [2024-11-26 13:35:27.143845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:38.749 [2024-11-26 13:35:27.143853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:38.750 [2024-11-26 13:35:27.143860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:38.750 [2024-11-26 13:35:27.143900] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:38.750 [2024-11-26 13:35:27.143908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:38.750 [2024-11-26 13:35:27.143923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:38.750 [2024-11-26 13:35:27.143931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:38.750 [2024-11-26 13:35:27.143937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:38.750 [2024-11-26 13:35:27.143945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.143954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:38.750 [2024-11-26 13:35:27.143961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:22:38.750 [2024-11-26 13:35:27.143967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.174733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.174920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:38.750 [2024-11-26 13:35:27.174939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.722 ms 00:22:38.750 [2024-11-26 13:35:27.174953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.175043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.175052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:38.750 [2024-11-26 13:35:27.175061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:38.750 [2024-11-26 13:35:27.175068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.218450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.218490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:38.750 [2024-11-26 13:35:27.218502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.326 ms 00:22:38.750 [2024-11-26 13:35:27.218511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.218553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.218563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:38.750 [2024-11-26 13:35:27.218577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:38.750 [2024-11-26 13:35:27.218584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.219049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.219067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:38.750 [2024-11-26 13:35:27.219077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:22:38.750 [2024-11-26 13:35:27.219086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.219234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.219245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:38.750 [2024-11-26 13:35:27.219259] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:22:38.750 [2024-11-26 13:35:27.219267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.233657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.233690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:38.750 [2024-11-26 13:35:27.233703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.371 ms 00:22:38.750 [2024-11-26 13:35:27.233711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.246955] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:38.750 [2024-11-26 13:35:27.246989] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:38.750 [2024-11-26 13:35:27.247001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.247010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:38.750 [2024-11-26 13:35:27.247019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.196 ms 00:22:38.750 [2024-11-26 13:35:27.247028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.271943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.271979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:38.750 [2024-11-26 13:35:27.271990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.875 ms 00:22:38.750 [2024-11-26 13:35:27.271998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.284065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.284097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:38.750 [2024-11-26 13:35:27.284108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.022 ms 00:22:38.750 [2024-11-26 13:35:27.284117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.295779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.295927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:38.750 [2024-11-26 13:35:27.295945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.627 ms 00:22:38.750 [2024-11-26 13:35:27.295953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.750 [2024-11-26 13:35:27.296637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.750 [2024-11-26 13:35:27.296659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:38.750 [2024-11-26 13:35:27.296673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:22:38.750 [2024-11-26 13:35:27.296681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.359419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.011 [2024-11-26 13:35:27.359497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.011 [2024-11-26 13:35:27.359516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.717 ms 00:22:39.011 [2024-11-26 13:35:27.359524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.371009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.011 [2024-11-26 13:35:27.374498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.011 [2024-11-26 13:35:27.374534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.011 [2024-11-26 13:35:27.374548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.912 ms 00:22:39.011 [2024-11-26 13:35:27.374556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.374701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.011 [2024-11-26 13:35:27.374714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.011 [2024-11-26 13:35:27.374723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:39.011 [2024-11-26 13:35:27.374734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.374807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.011 [2024-11-26 13:35:27.374818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.011 [2024-11-26 13:35:27.374826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:39.011 [2024-11-26 13:35:27.374835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.374856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.011 [2024-11-26 13:35:27.374865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.011 [2024-11-26 13:35:27.374874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:39.011 [2024-11-26 13:35:27.374881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.011 [2024-11-26 13:35:27.374920] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.011 [2024-11-26 13:35:27.374930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.012 [2024-11-26 13:35:27.374939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.012 [2024-11-26 13:35:27.374948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:39.012 [2024-11-26 13:35:27.374956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.012 [2024-11-26 13:35:27.399969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.012 [2024-11-26 13:35:27.400016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.012 [2024-11-26 13:35:27.400031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.993 ms 00:22:39.012 [2024-11-26 13:35:27.400044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.012 [2024-11-26 13:35:27.400130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.012 [2024-11-26 13:35:27.400141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.012 [2024-11-26 13:35:27.400150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:39.012 [2024-11-26 13:35:27.400159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:39.012 [2024-11-26 13:35:27.401250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.175 ms, result 0 00:22:39.956  [2024-11-26T13:35:29.470Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-26T13:35:30.858Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-26T13:35:31.430Z] Copying: 68/1024 [MB] (24 MBps) [2024-11-26T13:35:32.816Z] Copying: 86/1024 [MB] (17 MBps) [2024-11-26T13:35:33.758Z] Copying: 102/1024 [MB] (16 MBps) [2024-11-26T13:35:34.774Z] Copying: 116/1024 [MB] (14 MBps) [2024-11-26T13:35:35.753Z] Copying: 134/1024 [MB] (18 MBps) [2024-11-26T13:35:36.696Z] Copying: 155/1024 [MB] (20 MBps) [2024-11-26T13:35:37.641Z] Copying: 183/1024 [MB] (27 MBps) [2024-11-26T13:35:38.585Z] Copying: 212/1024 [MB] (29 MBps) [2024-11-26T13:35:39.526Z] Copying: 250/1024 [MB] (38 MBps) [2024-11-26T13:35:40.471Z] Copying: 287/1024 [MB] (36 MBps) [2024-11-26T13:35:41.859Z] Copying: 324/1024 [MB] (36 MBps) [2024-11-26T13:35:42.431Z] Copying: 363/1024 [MB] (39 MBps) [2024-11-26T13:35:43.817Z] Copying: 388/1024 [MB] (24 MBps) [2024-11-26T13:35:44.759Z] Copying: 429/1024 [MB] (41 MBps) [2024-11-26T13:35:45.702Z] Copying: 460/1024 [MB] (31 MBps) [2024-11-26T13:35:46.640Z] Copying: 493/1024 [MB] (33 MBps) [2024-11-26T13:35:47.584Z] Copying: 529/1024 [MB] (35 MBps) [2024-11-26T13:35:48.528Z] Copying: 571/1024 [MB] (42 MBps) [2024-11-26T13:35:49.481Z] Copying: 596/1024 [MB] (25 MBps) [2024-11-26T13:35:50.424Z] Copying: 617/1024 [MB] (21 MBps) [2024-11-26T13:35:51.851Z] Copying: 640/1024 [MB] (22 MBps) [2024-11-26T13:35:52.419Z] Copying: 655/1024 [MB] (15 MBps) [2024-11-26T13:35:53.800Z] Copying: 672/1024 [MB] (16 MBps) [2024-11-26T13:35:54.742Z] Copying: 699/1024 [MB] (27 MBps) [2024-11-26T13:35:55.686Z] Copying: 743/1024 [MB] (43 MBps) [2024-11-26T13:35:56.629Z] Copying: 765/1024 [MB] (21 MBps) [2024-11-26T13:35:57.573Z] Copying: 783/1024 [MB] (17 MBps) [2024-11-26T13:35:58.512Z] Copying: 801/1024 [MB] (18 MBps) [2024-11-26T13:35:59.462Z] Copying: 839/1024 [MB] (37 MBps) [2024-11-26T13:36:00.849Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-26T13:36:01.421Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-26T13:36:02.808Z] Copying: 883/1024 [MB] (17 MBps) [2024-11-26T13:36:03.752Z] Copying: 899/1024 [MB] (15 MBps) [2024-11-26T13:36:04.696Z] Copying: 916/1024 [MB] (16 MBps) [2024-11-26T13:36:05.655Z] Copying: 927/1024 [MB] (11 MBps) [2024-11-26T13:36:06.656Z] Copying: 945/1024 [MB] (17 MBps) [2024-11-26T13:36:07.594Z] Copying: 968/1024 [MB] (23 MBps) [2024-11-26T13:36:08.539Z] Copying: 986/1024 [MB] (18 MBps) [2024-11-26T13:36:09.483Z] Copying: 1000/1024 [MB] (13 MBps) [2024-11-26T13:36:10.426Z] Copying: 1012/1024 [MB] (11 MBps) [2024-11-26T13:36:11.812Z] Copying: 1046048/1048576 [kB] (9688 kBps) [2024-11-26T13:36:11.812Z] Copying: 1048484/1048576 [kB] (2436 kBps) [2024-11-26T13:36:11.812Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-26 13:36:11.483628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.483687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:23.242 [2024-11-26 13:36:11.483714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:23.242 [2024-11-26 13:36:11.483724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.483749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:23.242 [2024-11-26 13:36:11.486798] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.486843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:23.242 [2024-11-26 13:36:11.486858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.031 ms 00:23:23.242 [2024-11-26 13:36:11.486867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.499006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.499058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:23.242 [2024-11-26 13:36:11.499071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.894 ms 00:23:23.242 [2024-11-26 13:36:11.499087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.526095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.526293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:23.242 [2024-11-26 13:36:11.526317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.988 ms 00:23:23.242 [2024-11-26 13:36:11.526327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.532553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.532710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:23.242 [2024-11-26 13:36:11.532731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.187 ms 00:23:23.242 [2024-11-26 13:36:11.532741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.560297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.560502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.242 [2024-11-26 13:36:11.560525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.495 ms 00:23:23.242 [2024-11-26 13:36:11.560533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.242 [2024-11-26 13:36:11.577282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.242 [2024-11-26 13:36:11.577335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.242 [2024-11-26 13:36:11.577348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.706 ms 00:23:23.242 [2024-11-26 13:36:11.577357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.874513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.505 [2024-11-26 13:36:11.874609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:23.505 [2024-11-26 13:36:11.874626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 297.099 ms 00:23:23.505 [2024-11-26 13:36:11.874636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.902501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.505 [2024-11-26 13:36:11.902556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.505 [2024-11-26 13:36:11.902569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.847 ms 00:23:23.505 [2024-11-26 13:36:11.902578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.928137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.505 [2024-11-26 13:36:11.928339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.505 [2024-11-26 13:36:11.928360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.511 ms 00:23:23.505 [2024-11-26 13:36:11.928368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.953008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.505 [2024-11-26 13:36:11.953068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.505 [2024-11-26 13:36:11.953082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.600 ms 00:23:23.505 [2024-11-26 13:36:11.953091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.977793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.505 [2024-11-26 13:36:11.977842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.505 [2024-11-26 13:36:11.977856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.626 ms 00:23:23.505 [2024-11-26 13:36:11.977865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.505 [2024-11-26 13:36:11.977910] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.505 [2024-11-26 13:36:11.977927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 93952 / 261120 wr_cnt: 1 state: open 00:23:23.505 [2024-11-26 13:36:11.977939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.977997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978048] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 
13:36:11.978247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:23.505 [2024-11-26 13:36:11.978386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:23:23.506 [2024-11-26 13:36:11.978463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.506 [2024-11-26 13:36:11.978805] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.506 [2024-11-26 13:36:11.978814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:23:23.506 [2024-11-26 13:36:11.978822] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 93952 00:23:23.506 [2024-11-26 13:36:11.978830] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 94912 00:23:23.506 [2024-11-26 13:36:11.978838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 93952 00:23:23.506 [2024-11-26 13:36:11.978846] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:23:23.506 [2024-11-26 13:36:11.978869] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.506 [2024-11-26 13:36:11.978877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.506 [2024-11-26 13:36:11.978885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.506 [2024-11-26 13:36:11.978892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.506 [2024-11-26 13:36:11.978899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.506 [2024-11-26 13:36:11.978907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.506 [2024-11-26 13:36:11.978915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.506 [2024-11-26 13:36:11.978924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:23:23.506 [2024-11-26 13:36:11.978940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:11.992832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.506 [2024-11-26 13:36:11.992877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:23.506 
[2024-11-26 13:36:11.992895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:23:23.506 [2024-11-26 13:36:11.992904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:11.993322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.506 [2024-11-26 13:36:11.993334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.506 [2024-11-26 13:36:11.993343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:23:23.506 [2024-11-26 13:36:11.993351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:12.030052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.506 [2024-11-26 13:36:12.030107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.506 [2024-11-26 13:36:12.030119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.506 [2024-11-26 13:36:12.030129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:12.030202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.506 [2024-11-26 13:36:12.030213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.506 [2024-11-26 13:36:12.030223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.506 [2024-11-26 13:36:12.030232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:12.030295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.506 [2024-11-26 13:36:12.030307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.506 [2024-11-26 13:36:12.030320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.506 [2024-11-26 13:36:12.030330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.506 [2024-11-26 13:36:12.030348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.506 [2024-11-26 13:36:12.030358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.507 [2024-11-26 13:36:12.030367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.507 [2024-11-26 13:36:12.030377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.116563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.116806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.768 [2024-11-26 13:36:12.116829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.116838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.186625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.186687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.768 [2024-11-26 13:36:12.186700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.186711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.186796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.186807] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.768 [2024-11-26 13:36:12.186816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.186832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.186870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.186881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.768 [2024-11-26 13:36:12.186889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.186898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.186998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.187009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.768 [2024-11-26 13:36:12.187018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.187026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.187065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.187076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:23.768 [2024-11-26 13:36:12.187084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.187092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.187136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.187147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.768 [2024-11-26 13:36:12.187156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.187164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.187217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.768 [2024-11-26 13:36:12.187243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.768 [2024-11-26 13:36:12.187251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.768 [2024-11-26 13:36:12.187260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.768 [2024-11-26 13:36:12.187399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 703.733 ms, result 0 00:23:25.151 00:23:25.151 00:23:25.151 13:36:13 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:25.151 [2024-11-26 13:36:13.486858] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:23:25.151 [2024-11-26 13:36:13.487734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78924 ] 00:23:25.151 [2024-11-26 13:36:13.660781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.412 [2024-11-26 13:36:13.791789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.673 [2024-11-26 13:36:14.096348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.673 [2024-11-26 13:36:14.096460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:25.937 [2024-11-26 13:36:14.259261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.259335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:25.937 [2024-11-26 13:36:14.259352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:25.937 [2024-11-26 13:36:14.259361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.259420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.259434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.937 [2024-11-26 13:36:14.259471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:25.937 [2024-11-26 13:36:14.259480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.259503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:25.937 [2024-11-26 13:36:14.260331] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:25.937 [2024-11-26 13:36:14.260373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.260382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.937 [2024-11-26 13:36:14.260392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:23:25.937 [2024-11-26 13:36:14.260400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.262168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:25.937 [2024-11-26 13:36:14.276811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.276863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:25.937 [2024-11-26 13:36:14.276877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.644 ms 00:23:25.937 [2024-11-26 13:36:14.276885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.276969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.276980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:25.937 [2024-11-26 13:36:14.276988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:25.937 [2024-11-26 13:36:14.276996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.285457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:25.937 [2024-11-26 13:36:14.285498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.937 [2024-11-26 13:36:14.285510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.382 ms 00:23:25.937 [2024-11-26 13:36:14.285524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.285607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.285617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.937 [2024-11-26 13:36:14.285626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:25.937 [2024-11-26 13:36:14.285633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.285678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.285689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:25.937 [2024-11-26 13:36:14.285697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:25.937 [2024-11-26 13:36:14.285706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.285734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.937 [2024-11-26 13:36:14.289675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.289715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.937 [2024-11-26 13:36:14.289728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:23:25.937 [2024-11-26 13:36:14.289737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.289771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.289780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:25.937 [2024-11-26 13:36:14.289789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:25.937 [2024-11-26 13:36:14.289797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.289849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:25.937 [2024-11-26 13:36:14.289873] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:25.937 [2024-11-26 13:36:14.289912] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:25.937 [2024-11-26 13:36:14.289932] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:25.937 [2024-11-26 13:36:14.290038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:25.937 [2024-11-26 13:36:14.290050] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:25.937 [2024-11-26 13:36:14.290061] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:25.937 [2024-11-26 13:36:14.290072] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:25.937 [2024-11-26 13:36:14.290083] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:25.937 [2024-11-26 13:36:14.290092] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:25.937 [2024-11-26 13:36:14.290100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:25.937 [2024-11-26 13:36:14.290108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:25.937 [2024-11-26 13:36:14.290119] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:25.937 [2024-11-26 13:36:14.290127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.290135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:25.937 [2024-11-26 13:36:14.290143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:25.937 [2024-11-26 13:36:14.290151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.290234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.937 [2024-11-26 13:36:14.290243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:25.937 [2024-11-26 13:36:14.290250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:25.937 [2024-11-26 13:36:14.290258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.937 [2024-11-26 13:36:14.290366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:25.937 [2024-11-26 13:36:14.290378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:25.937 [2024-11-26 13:36:14.290386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.937 [2024-11-26 13:36:14.290395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:25.937 [2024-11-26 13:36:14.290411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:25.937 [2024-11-26 13:36:14.290426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:25.937 [2024-11-26 13:36:14.290434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.937 [2024-11-26 13:36:14.290474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:25.937 [2024-11-26 13:36:14.290482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:25.937 [2024-11-26 13:36:14.290490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.937 [2024-11-26 13:36:14.290505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:25.937 [2024-11-26 13:36:14.290515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:25.937 [2024-11-26 13:36:14.290523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:25.937 [2024-11-26 13:36:14.290537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:25.937 [2024-11-26 13:36:14.290545] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:25.937 [2024-11-26 13:36:14.290561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:25.937 [2024-11-26 13:36:14.290568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:25.938 [2024-11-26 13:36:14.290581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:25.938 [2024-11-26 13:36:14.290603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:25.938 [2024-11-26 13:36:14.290627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:25.938 [2024-11-26 13:36:14.290649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.938 [2024-11-26 13:36:14.290662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:25.938 [2024-11-26 13:36:14.290670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:25.938 [2024-11-26 13:36:14.290676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.938 [2024-11-26 13:36:14.290683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:25.938 [2024-11-26 13:36:14.290690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:25.938 [2024-11-26 13:36:14.290697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:25.938 [2024-11-26 13:36:14.290712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:25.938 [2024-11-26 13:36:14.290719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:25.938 [2024-11-26 13:36:14.290734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:25.938 [2024-11-26 13:36:14.290743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.938 [2024-11-26 13:36:14.290758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:25.938 [2024-11-26 13:36:14.290765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:25.938 [2024-11-26 13:36:14.290771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:25.938 
[2024-11-26 13:36:14.290779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:25.938 [2024-11-26 13:36:14.290786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:25.938 [2024-11-26 13:36:14.290792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:25.938 [2024-11-26 13:36:14.290801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:25.938 [2024-11-26 13:36:14.290810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:25.938 [2024-11-26 13:36:14.290829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:25.938 [2024-11-26 13:36:14.290836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:25.938 [2024-11-26 13:36:14.290844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:25.938 [2024-11-26 13:36:14.290851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:25.938 [2024-11-26 13:36:14.290857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:25.938 [2024-11-26 13:36:14.290865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:25.938 [2024-11-26 13:36:14.290872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:25.938 [2024-11-26 13:36:14.290879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:25.938 [2024-11-26 13:36:14.290887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:25.938 [2024-11-26 13:36:14.290924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:25.938 [2024-11-26 13:36:14.290933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.938 [2024-11-26 13:36:14.290949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:25.938 [2024-11-26 13:36:14.290957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:25.938 [2024-11-26 13:36:14.290964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:25.938 [2024-11-26 13:36:14.290971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.290980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:25.938 [2024-11-26 13:36:14.290990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:23:25.938 [2024-11-26 13:36:14.290998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.324103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.324318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.938 [2024-11-26 13:36:14.324339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.057 ms 00:23:25.938 [2024-11-26 13:36:14.324356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.324478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.324489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:25.938 [2024-11-26 13:36:14.324500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:25.938 [2024-11-26 13:36:14.324508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.371776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.371986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:25.938 [2024-11-26 13:36:14.372010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.199 ms 00:23:25.938 [2024-11-26 13:36:14.372020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.372077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.372089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:25.938 [2024-11-26 13:36:14.372106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:25.938 [2024-11-26 13:36:14.372115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.372764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.372798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:25.938 [2024-11-26 13:36:14.372811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:23:25.938 [2024-11-26 13:36:14.372821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.372982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.372992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:25.938 [2024-11-26 13:36:14.373007] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:23:25.938 [2024-11-26 13:36:14.373015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.388991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.389040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:25.938 [2024-11-26 13:36:14.389056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.955 ms 00:23:25.938 [2024-11-26 13:36:14.389064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.403542] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:25.938 [2024-11-26 13:36:14.403591] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:25.938 [2024-11-26 13:36:14.403605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.403614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:25.938 [2024-11-26 13:36:14.403624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:23:25.938 [2024-11-26 13:36:14.403631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.429648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.429702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:25.938 [2024-11-26 13:36:14.429715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.956 ms 00:23:25.938 [2024-11-26 13:36:14.429723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.442963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.938 [2024-11-26 13:36:14.443156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:25.938 [2024-11-26 13:36:14.443177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.179 ms 00:23:25.938 [2024-11-26 13:36:14.443185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.938 [2024-11-26 13:36:14.456365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.939 [2024-11-26 13:36:14.456424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:25.939 [2024-11-26 13:36:14.456457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.811 ms 00:23:25.939 [2024-11-26 13:36:14.456467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.939 [2024-11-26 13:36:14.457144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.939 [2024-11-26 13:36:14.457179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:25.939 [2024-11-26 13:36:14.457193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:23:25.939 [2024-11-26 13:36:14.457202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.200 [2024-11-26 13:36:14.522873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.200 [2024-11-26 13:36:14.522950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:26.200 [2024-11-26 13:36:14.522975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.649 ms 00:23:26.201 [2024-11-26 13:36:14.522985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.534424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:26.201 [2024-11-26 13:36:14.537647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.537692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.201 [2024-11-26 13:36:14.537707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.600 ms 00:23:26.201 [2024-11-26 13:36:14.537716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.537809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.537820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:26.201 [2024-11-26 13:36:14.537830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:26.201 [2024-11-26 13:36:14.537842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.539614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.539658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:26.201 [2024-11-26 13:36:14.539670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:23:26.201 [2024-11-26 13:36:14.539679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.539708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.539717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:26.201 [2024-11-26 13:36:14.539726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:26.201 [2024-11-26 13:36:14.539735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.539781] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:26.201 [2024-11-26 13:36:14.539792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.539801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:26.201 [2024-11-26 13:36:14.539810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:26.201 [2024-11-26 13:36:14.539818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.566423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.566492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:26.201 [2024-11-26 13:36:14.566506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.582 ms 00:23:26.201 [2024-11-26 13:36:14.566523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.201 [2024-11-26 13:36:14.566621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.201 [2024-11-26 13:36:14.566631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:26.201 [2024-11-26 13:36:14.566641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:26.201 [2024-11-26 13:36:14.566648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
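
The superblock layout dump above expresses every region as a blk_offs/blk_sz pair counted in 4 KiB FTL blocks, and the figures line up with the human-readable dump_region output that precedes it: base-dev region type 0x9 starts at blk_offs 0x40 (64 blocks x 4 KiB = 0.25 MiB, the data_btm offset) and spans blk_sz 0x1900000 blocks. A minimal standalone cross-check of that size — not part of the test scripts, assuming only the 4096-byte block size the bdev JSON later in this log reports:

    # Quick sanity check of the layout dump; bash arithmetic accepts the
    # 0x-prefixed hex literal directly.
    blk_sz=0x1900000                               # from "Region type:0x9 ... blk_sz:0x1900000"
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"  # prints "102400 MiB", matching "blocks: 102400.00 MiB"
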
00:23:26.201 [2024-11-26 13:36:14.567980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.188 ms, result 0 00:23:27.577  [2024-11-26T13:36:17.081Z] Copying: 7136/1048576 [kB] (7136 kBps) [2024-11-26T13:36:18.013Z] Copying: 17/1024 [MB] (10 MBps) [2024-11-26T13:36:18.948Z] Copying: 29/1024 [MB] (11 MBps) [2024-11-26T13:36:19.881Z] Copying: 40/1024 [MB] (11 MBps) [2024-11-26T13:36:20.814Z] Copying: 51/1024 [MB] (10 MBps) [2024-11-26T13:36:22.186Z] Copying: 62/1024 [MB] (11 MBps) [2024-11-26T13:36:23.119Z] Copying: 73/1024 [MB] (10 MBps) [2024-11-26T13:36:24.053Z] Copying: 84/1024 [MB] (11 MBps) [2024-11-26T13:36:24.987Z] Copying: 96/1024 [MB] (11 MBps) [2024-11-26T13:36:25.921Z] Copying: 107/1024 [MB] (11 MBps) [2024-11-26T13:36:26.857Z] Copying: 118/1024 [MB] (10 MBps) [2024-11-26T13:36:27.792Z] Copying: 129/1024 [MB] (10 MBps) [2024-11-26T13:36:29.168Z] Copying: 139/1024 [MB] (10 MBps) [2024-11-26T13:36:30.116Z] Copying: 156/1024 [MB] (17 MBps) [2024-11-26T13:36:31.059Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-26T13:36:32.002Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-26T13:36:32.941Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-26T13:36:33.883Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-26T13:36:34.858Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-26T13:36:35.799Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-26T13:36:37.179Z] Copying: 233/1024 [MB] (11 MBps) [2024-11-26T13:36:38.112Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-26T13:36:39.048Z] Copying: 257/1024 [MB] (13 MBps) [2024-11-26T13:36:39.984Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-26T13:36:40.919Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-26T13:36:41.855Z] Copying: 292/1024 [MB] (11 MBps) [2024-11-26T13:36:42.789Z] Copying: 305/1024 [MB] (12 MBps) [2024-11-26T13:36:44.166Z] Copying: 317/1024 [MB] (12 MBps) [2024-11-26T13:36:45.103Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-26T13:36:46.040Z] Copying: 342/1024 [MB] (12 MBps) [2024-11-26T13:36:46.980Z] Copying: 354/1024 [MB] (11 MBps) [2024-11-26T13:36:47.923Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-26T13:36:48.858Z] Copying: 377/1024 [MB] (11 MBps) [2024-11-26T13:36:49.790Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-26T13:36:51.164Z] Copying: 400/1024 [MB] (11 MBps) [2024-11-26T13:36:52.098Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-26T13:36:53.033Z] Copying: 422/1024 [MB] (11 MBps) [2024-11-26T13:36:53.966Z] Copying: 433/1024 [MB] (10 MBps) [2024-11-26T13:36:54.898Z] Copying: 444/1024 [MB] (10 MBps) [2024-11-26T13:36:55.830Z] Copying: 455/1024 [MB] (11 MBps) [2024-11-26T13:36:56.763Z] Copying: 466/1024 [MB] (11 MBps) [2024-11-26T13:36:58.136Z] Copying: 478/1024 [MB] (11 MBps) [2024-11-26T13:36:59.070Z] Copying: 489/1024 [MB] (11 MBps) [2024-11-26T13:37:00.003Z] Copying: 501/1024 [MB] (11 MBps) [2024-11-26T13:37:00.936Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-26T13:37:01.875Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-26T13:37:02.815Z] Copying: 535/1024 [MB] (11 MBps) [2024-11-26T13:37:04.192Z] Copying: 546/1024 [MB] (11 MBps) [2024-11-26T13:37:05.126Z] Copying: 557/1024 [MB] (10 MBps) [2024-11-26T13:37:06.090Z] Copying: 568/1024 [MB] (11 MBps) [2024-11-26T13:37:07.019Z] Copying: 579/1024 [MB] (11 MBps) [2024-11-26T13:37:07.954Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-26T13:37:08.890Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-26T13:37:09.822Z] Copying: 629/1024 [MB] (26 MBps) [2024-11-26T13:37:11.196Z] Copying: 652/1024 [MB] (23 MBps) [2024-11-26T13:37:11.762Z] Copying: 670/1024 [MB] (18 MBps) 
[2024-11-26T13:37:13.134Z] Copying: 691/1024 [MB] (21 MBps) [2024-11-26T13:37:14.067Z] Copying: 710/1024 [MB] (18 MBps) [2024-11-26T13:37:14.999Z] Copying: 732/1024 [MB] (21 MBps) [2024-11-26T13:37:15.934Z] Copying: 753/1024 [MB] (21 MBps) [2024-11-26T13:37:16.867Z] Copying: 774/1024 [MB] (20 MBps) [2024-11-26T13:37:17.801Z] Copying: 796/1024 [MB] (22 MBps) [2024-11-26T13:37:19.175Z] Copying: 814/1024 [MB] (18 MBps) [2024-11-26T13:37:20.111Z] Copying: 836/1024 [MB] (21 MBps) [2024-11-26T13:37:21.044Z] Copying: 860/1024 [MB] (23 MBps) [2024-11-26T13:37:21.978Z] Copying: 873/1024 [MB] (12 MBps) [2024-11-26T13:37:22.913Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-26T13:37:23.849Z] Copying: 916/1024 [MB] (31 MBps) [2024-11-26T13:37:24.784Z] Copying: 965/1024 [MB] (49 MBps) [2024-11-26T13:37:25.042Z] Copying: 1017/1024 [MB] (52 MBps) [2024-11-26T13:37:25.301Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 13:37:25.100387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.100471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.731 [2024-11-26 13:37:25.100486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.731 [2024-11-26 13:37:25.100499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.100521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.731 [2024-11-26 13:37:25.105972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.106025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.731 [2024-11-26 13:37:25.106043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.434 ms 00:24:36.731 [2024-11-26 13:37:25.106057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.106478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.106503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.731 [2024-11-26 13:37:25.106518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:24:36.731 [2024-11-26 13:37:25.106537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.115998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.116028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:36.731 [2024-11-26 13:37:25.116038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.435 ms 00:24:36.731 [2024-11-26 13:37:25.116045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.122471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.122506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:36.731 [2024-11-26 13:37:25.122516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.395 ms 00:24:36.731 [2024-11-26 13:37:25.122526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.156731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.156941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:36.731 [2024-11-26 
13:37:25.156963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.159 ms 00:24:36.731 [2024-11-26 13:37:25.156975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.178301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.178372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:36.731 [2024-11-26 13:37:25.178390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.280 ms 00:24:36.731 [2024-11-26 13:37:25.178402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.239745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.239931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.731 [2024-11-26 13:37:25.239956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.244 ms 00:24:36.731 [2024-11-26 13:37:25.239969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.731 [2024-11-26 13:37:25.276380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.731 [2024-11-26 13:37:25.276467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.731 [2024-11-26 13:37:25.276485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.386 ms 00:24:36.731 [2024-11-26 13:37:25.276497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.993 [2024-11-26 13:37:25.312675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.993 [2024-11-26 13:37:25.312734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.993 [2024-11-26 13:37:25.312751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.131 ms 00:24:36.993 [2024-11-26 13:37:25.312764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.993 [2024-11-26 13:37:25.349498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.994 [2024-11-26 13:37:25.349596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.994 [2024-11-26 13:37:25.349626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.676 ms 00:24:36.994 [2024-11-26 13:37:25.349646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.994 [2024-11-26 13:37:25.378723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.994 [2024-11-26 13:37:25.378910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.994 [2024-11-26 13:37:25.378927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.875 ms 00:24:36.994 [2024-11-26 13:37:25.378935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.994 [2024-11-26 13:37:25.378966] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.994 [2024-11-26 13:37:25.378980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:36.994 [2024-11-26 13:37:25.378990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.378998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379006] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379192] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 
13:37:25.379388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:24:36.994 [2024-11-26 13:37:25.379609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.994 [2024-11-26 13:37:25.379639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.995 [2024-11-26 13:37:25.379781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.995 [2024-11-26 13:37:25.379789] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae6ca256-4021-4c9d-91f8-3274fd083f2c 00:24:36.995 [2024-11-26 13:37:25.379797] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:36.995 [2024-11-26 
13:37:25.379803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 38080 00:24:36.995 [2024-11-26 13:37:25.379810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 37120 00:24:36.995 [2024-11-26 13:37:25.379818] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0259 00:24:36.995 [2024-11-26 13:37:25.379825] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.995 [2024-11-26 13:37:25.379840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.995 [2024-11-26 13:37:25.379847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.995 [2024-11-26 13:37:25.379853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.995 [2024-11-26 13:37:25.379860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.995 [2024-11-26 13:37:25.379867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.995 [2024-11-26 13:37:25.379875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.995 [2024-11-26 13:37:25.379883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:24:36.995 [2024-11-26 13:37:25.379891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.392123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.995 [2024-11-26 13:37:25.392161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.995 [2024-11-26 13:37:25.392172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.215 ms 00:24:36.995 [2024-11-26 13:37:25.392184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.392559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.995 [2024-11-26 13:37:25.392570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.995 [2024-11-26 13:37:25.392579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:36.995 [2024-11-26 13:37:25.392587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.425002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.995 [2024-11-26 13:37:25.425048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.995 [2024-11-26 13:37:25.425059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.995 [2024-11-26 13:37:25.425067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.425130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.995 [2024-11-26 13:37:25.425139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.995 [2024-11-26 13:37:25.425148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.995 [2024-11-26 13:37:25.425156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.425233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.995 [2024-11-26 13:37:25.425244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.995 [2024-11-26 13:37:25.425257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.995 [2024-11-26 13:37:25.425265] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.425281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.995 [2024-11-26 13:37:25.425290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.995 [2024-11-26 13:37:25.425298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.995 [2024-11-26 13:37:25.425306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.995 [2024-11-26 13:37:25.502770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.995 [2024-11-26 13:37:25.502821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.995 [2024-11-26 13:37:25.502833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.995 [2024-11-26 13:37:25.502841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.565812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.565987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.257 [2024-11-26 13:37:25.566003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.257 [2024-11-26 13:37:25.566078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.257 [2024-11-26 13:37:25.566157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.257 [2024-11-26 13:37:25.566267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:37.257 [2024-11-26 13:37:25.566320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.257 [2024-11-26 13:37:25.566377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.257 [2024-11-26 13:37:25.566438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.257 [2024-11-26 13:37:25.566472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.257 [2024-11-26 13:37:25.566479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.257 [2024-11-26 13:37:25.566589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.176 ms, result 0 00:24:37.823 00:24:37.823 00:24:37.823 13:37:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:40.448 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:40.448 Process with pid 77219 is not found 00:24:40.448 Remove shared memory files 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77219 00:24:40.448 13:37:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77219 ']' 00:24:40.448 13:37:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77219 00:24:40.448 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77219) - No such process 00:24:40.448 13:37:28 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77219 is not found' 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:40.448 13:37:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:40.448 ************************************ 00:24:40.448 END TEST ftl_restore 00:24:40.448 ************************************ 00:24:40.448 00:24:40.448 real 4m4.835s 00:24:40.448 user 3m52.548s 00:24:40.448 sys 0m12.212s 00:24:40.448 13:37:28 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.448 13:37:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:40.448 13:37:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:40.448 13:37:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:40.448 13:37:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.448 13:37:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:40.448 ************************************ 00:24:40.448 START TEST ftl_dirty_shutdown 00:24:40.448 
************************************ 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:40.448 * Looking for test storage... 00:24:40.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.448 --rc genhtml_branch_coverage=1 00:24:40.448 --rc genhtml_function_coverage=1 00:24:40.448 --rc genhtml_legend=1 00:24:40.448 --rc geninfo_all_blocks=1 00:24:40.448 --rc geninfo_unexecuted_blocks=1 00:24:40.448 00:24:40.448 ' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.448 --rc genhtml_branch_coverage=1 00:24:40.448 --rc genhtml_function_coverage=1 00:24:40.448 --rc genhtml_legend=1 00:24:40.448 --rc geninfo_all_blocks=1 00:24:40.448 --rc geninfo_unexecuted_blocks=1 00:24:40.448 00:24:40.448 ' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.448 --rc genhtml_branch_coverage=1 00:24:40.448 --rc genhtml_function_coverage=1 00:24:40.448 --rc genhtml_legend=1 00:24:40.448 --rc geninfo_all_blocks=1 00:24:40.448 --rc geninfo_unexecuted_blocks=1 00:24:40.448 00:24:40.448 ' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.448 --rc genhtml_branch_coverage=1 00:24:40.448 --rc genhtml_function_coverage=1 00:24:40.448 --rc genhtml_legend=1 00:24:40.448 --rc geninfo_all_blocks=1 00:24:40.448 --rc geninfo_unexecuted_blocks=1 00:24:40.448 00:24:40.448 ' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:40.448 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:40.449 13:37:28 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79760 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79760 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79760 ']' 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:40.449 13:37:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:40.449 [2024-11-26 13:37:28.775234] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
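
From this point dirty_shutdown.sh drives the freshly started target entirely through scripts/rpc.py over the UNIX socket named in the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. Condensed from the xtrace that follows, the base-bdev setup amounts to the sequence below — a sketch of what the create_base_bdev and get_bdev_size helpers in ftl/common.sh do, assuming spdk_tgt is already listening on the default socket; bdev names and the PCI address are taken from the log:

    # Attach the QEMU NVMe controller, then derive the bdev size in MiB
    # from block_size * num_blocks, as get_bdev_size does with jq.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo "$(( bs * nb / 1024 / 1024 )) MiB"                       # 5120 MiB, matching "bdev_size=5120" below
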
00:24:40.449 [2024-11-26 13:37:28.775530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79760 ] 00:24:40.449 [2024-11-26 13:37:28.937241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.726 [2024-11-26 13:37:29.037886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:41.292 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:41.549 13:37:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:41.807 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.807 { 00:24:41.807 "name": "nvme0n1", 00:24:41.807 "aliases": [ 00:24:41.808 "dcdc1bad-37df-4fc5-a1b5-a8a72b5b5431" 00:24:41.808 ], 00:24:41.808 "product_name": "NVMe disk", 00:24:41.808 "block_size": 4096, 00:24:41.808 "num_blocks": 1310720, 00:24:41.808 "uuid": "dcdc1bad-37df-4fc5-a1b5-a8a72b5b5431", 00:24:41.808 "numa_id": -1, 00:24:41.808 "assigned_rate_limits": { 00:24:41.808 "rw_ios_per_sec": 0, 00:24:41.808 "rw_mbytes_per_sec": 0, 00:24:41.808 "r_mbytes_per_sec": 0, 00:24:41.808 "w_mbytes_per_sec": 0 00:24:41.808 }, 00:24:41.808 "claimed": true, 00:24:41.808 "claim_type": "read_many_write_one", 00:24:41.808 "zoned": false, 00:24:41.808 "supported_io_types": { 00:24:41.808 "read": true, 00:24:41.808 "write": true, 00:24:41.808 "unmap": true, 00:24:41.808 "flush": true, 00:24:41.808 "reset": true, 00:24:41.808 "nvme_admin": true, 00:24:41.808 "nvme_io": true, 00:24:41.808 "nvme_io_md": false, 00:24:41.808 "write_zeroes": true, 00:24:41.808 "zcopy": false, 00:24:41.808 "get_zone_info": false, 00:24:41.808 "zone_management": false, 00:24:41.808 "zone_append": false, 00:24:41.808 "compare": true, 00:24:41.808 "compare_and_write": false, 00:24:41.808 "abort": true, 00:24:41.808 "seek_hole": false, 00:24:41.808 "seek_data": false, 00:24:41.808 
"copy": true, 00:24:41.808 "nvme_iov_md": false 00:24:41.808 }, 00:24:41.808 "driver_specific": { 00:24:41.808 "nvme": [ 00:24:41.808 { 00:24:41.808 "pci_address": "0000:00:11.0", 00:24:41.808 "trid": { 00:24:41.808 "trtype": "PCIe", 00:24:41.808 "traddr": "0000:00:11.0" 00:24:41.808 }, 00:24:41.808 "ctrlr_data": { 00:24:41.808 "cntlid": 0, 00:24:41.808 "vendor_id": "0x1b36", 00:24:41.808 "model_number": "QEMU NVMe Ctrl", 00:24:41.808 "serial_number": "12341", 00:24:41.808 "firmware_revision": "8.0.0", 00:24:41.808 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:41.808 "oacs": { 00:24:41.808 "security": 0, 00:24:41.808 "format": 1, 00:24:41.808 "firmware": 0, 00:24:41.808 "ns_manage": 1 00:24:41.808 }, 00:24:41.808 "multi_ctrlr": false, 00:24:41.808 "ana_reporting": false 00:24:41.808 }, 00:24:41.808 "vs": { 00:24:41.808 "nvme_version": "1.4" 00:24:41.808 }, 00:24:41.808 "ns_data": { 00:24:41.808 "id": 1, 00:24:41.808 "can_share": false 00:24:41.808 } 00:24:41.808 } 00:24:41.808 ], 00:24:41.808 "mp_policy": "active_passive" 00:24:41.808 } 00:24:41.808 } 00:24:41.808 ]' 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:41.808 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:42.066 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5b54a49f-a520-49f7-bdab-a730baef84b6 00:24:42.066 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:42.066 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b54a49f-a520-49f7-bdab-a730baef84b6 00:24:42.325 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:42.325 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0f31e923-2d4c-426a-9d7b-44b13680a22d 00:24:42.325 13:37:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f31e923-2d4c-426a-9d7b-44b13680a22d 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:42.583 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.842 { 00:24:42.842 "name": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:42.842 "aliases": [ 00:24:42.842 "lvs/nvme0n1p0" 00:24:42.842 ], 00:24:42.842 "product_name": "Logical Volume", 00:24:42.842 "block_size": 4096, 00:24:42.842 "num_blocks": 26476544, 00:24:42.842 "uuid": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:42.842 "assigned_rate_limits": { 00:24:42.842 "rw_ios_per_sec": 0, 00:24:42.842 "rw_mbytes_per_sec": 0, 00:24:42.842 "r_mbytes_per_sec": 0, 00:24:42.842 "w_mbytes_per_sec": 0 00:24:42.842 }, 00:24:42.842 "claimed": false, 00:24:42.842 "zoned": false, 00:24:42.842 "supported_io_types": { 00:24:42.842 "read": true, 00:24:42.842 "write": true, 00:24:42.842 "unmap": true, 00:24:42.842 "flush": false, 00:24:42.842 "reset": true, 00:24:42.842 "nvme_admin": false, 00:24:42.842 "nvme_io": false, 00:24:42.842 "nvme_io_md": false, 00:24:42.842 "write_zeroes": true, 00:24:42.842 "zcopy": false, 00:24:42.842 "get_zone_info": false, 00:24:42.842 "zone_management": false, 00:24:42.842 "zone_append": false, 00:24:42.842 "compare": false, 00:24:42.842 "compare_and_write": false, 00:24:42.842 "abort": false, 00:24:42.842 "seek_hole": true, 00:24:42.842 "seek_data": true, 00:24:42.842 "copy": false, 00:24:42.842 "nvme_iov_md": false 00:24:42.842 }, 00:24:42.842 "driver_specific": { 00:24:42.842 "lvol": { 00:24:42.842 "lvol_store_uuid": "0f31e923-2d4c-426a-9d7b-44b13680a22d", 00:24:42.842 "base_bdev": "nvme0n1", 00:24:42.842 "thin_provision": true, 00:24:42.842 "num_allocated_clusters": 0, 00:24:42.842 "snapshot": false, 00:24:42.842 "clone": false, 00:24:42.842 "esnap_clone": false 00:24:42.842 } 00:24:42.842 } 00:24:42.842 } 00:24:42.842 ]' 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.842 13:37:31 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:43.101 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.359 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.359 { 00:24:43.359 "name": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:43.359 "aliases": [ 00:24:43.359 "lvs/nvme0n1p0" 00:24:43.359 ], 00:24:43.359 "product_name": "Logical Volume", 00:24:43.359 "block_size": 4096, 00:24:43.359 "num_blocks": 26476544, 00:24:43.359 "uuid": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:43.359 "assigned_rate_limits": { 00:24:43.359 "rw_ios_per_sec": 0, 00:24:43.359 "rw_mbytes_per_sec": 0, 00:24:43.359 "r_mbytes_per_sec": 0, 00:24:43.359 "w_mbytes_per_sec": 0 00:24:43.359 }, 00:24:43.359 "claimed": false, 00:24:43.359 "zoned": false, 00:24:43.359 "supported_io_types": { 00:24:43.359 "read": true, 00:24:43.359 "write": true, 00:24:43.359 "unmap": true, 00:24:43.359 "flush": false, 00:24:43.359 "reset": true, 00:24:43.359 "nvme_admin": false, 00:24:43.359 "nvme_io": false, 00:24:43.359 "nvme_io_md": false, 00:24:43.359 "write_zeroes": true, 00:24:43.359 "zcopy": false, 00:24:43.359 "get_zone_info": false, 00:24:43.359 "zone_management": false, 00:24:43.359 "zone_append": false, 00:24:43.359 "compare": false, 00:24:43.359 "compare_and_write": false, 00:24:43.359 "abort": false, 00:24:43.359 "seek_hole": true, 00:24:43.359 "seek_data": true, 00:24:43.359 "copy": false, 00:24:43.359 "nvme_iov_md": false 00:24:43.359 }, 00:24:43.359 "driver_specific": { 00:24:43.359 "lvol": { 00:24:43.359 "lvol_store_uuid": "0f31e923-2d4c-426a-9d7b-44b13680a22d", 00:24:43.359 "base_bdev": "nvme0n1", 00:24:43.359 "thin_provision": true, 00:24:43.359 "num_allocated_clusters": 0, 00:24:43.360 "snapshot": false, 00:24:43.360 "clone": false, 00:24:43.360 "esnap_clone": false 00:24:43.360 } 00:24:43.360 } 00:24:43.360 } 00:24:43.360 ]' 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:43.360 13:37:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:43.618 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 00:24:43.876 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.876 { 00:24:43.876 "name": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:43.876 "aliases": [ 00:24:43.876 "lvs/nvme0n1p0" 00:24:43.876 ], 00:24:43.876 "product_name": "Logical Volume", 00:24:43.877 "block_size": 4096, 00:24:43.877 "num_blocks": 26476544, 00:24:43.877 "uuid": "341c1d4c-3cae-49cf-8ab2-76c7f7ea428b", 00:24:43.877 "assigned_rate_limits": { 00:24:43.877 "rw_ios_per_sec": 0, 00:24:43.877 "rw_mbytes_per_sec": 0, 00:24:43.877 "r_mbytes_per_sec": 0, 00:24:43.877 "w_mbytes_per_sec": 0 00:24:43.877 }, 00:24:43.877 "claimed": false, 00:24:43.877 "zoned": false, 00:24:43.877 "supported_io_types": { 00:24:43.877 "read": true, 00:24:43.877 "write": true, 00:24:43.877 "unmap": true, 00:24:43.877 "flush": false, 00:24:43.877 "reset": true, 00:24:43.877 "nvme_admin": false, 00:24:43.877 "nvme_io": false, 00:24:43.877 "nvme_io_md": false, 00:24:43.877 "write_zeroes": true, 00:24:43.877 "zcopy": false, 00:24:43.877 "get_zone_info": false, 00:24:43.877 "zone_management": false, 00:24:43.877 "zone_append": false, 00:24:43.877 "compare": false, 00:24:43.877 "compare_and_write": false, 00:24:43.877 "abort": false, 00:24:43.877 "seek_hole": true, 00:24:43.877 "seek_data": true, 00:24:43.877 "copy": false, 00:24:43.877 "nvme_iov_md": false 00:24:43.877 }, 00:24:43.877 "driver_specific": { 00:24:43.877 "lvol": { 00:24:43.877 "lvol_store_uuid": "0f31e923-2d4c-426a-9d7b-44b13680a22d", 00:24:43.877 "base_bdev": "nvme0n1", 00:24:43.877 "thin_provision": true, 00:24:43.877 "num_allocated_clusters": 0, 00:24:43.877 "snapshot": false, 00:24:43.877 "clone": false, 00:24:43.877 "esnap_clone": false 00:24:43.877 } 00:24:43.877 } 00:24:43.877 } 00:24:43.877 ]' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b 
--l2p_dram_limit 10' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:43.877 13:37:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 341c1d4c-3cae-49cf-8ab2-76c7f7ea428b --l2p_dram_limit 10 -c nvc0n1p0 00:24:44.137 [2024-11-26 13:37:32.486779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.486956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:44.137 [2024-11-26 13:37:32.486976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:44.137 [2024-11-26 13:37:32.486983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.487036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.487044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.137 [2024-11-26 13:37:32.487052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:44.137 [2024-11-26 13:37:32.487058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.487079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:44.137 [2024-11-26 13:37:32.487706] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:44.137 [2024-11-26 13:37:32.487723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.487729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.137 [2024-11-26 13:37:32.487737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:24:44.137 [2024-11-26 13:37:32.487743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.487797] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e55ade6-60d3-44a5-9469-fbf187dae141 00:24:44.137 [2024-11-26 13:37:32.488751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.488773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:44.137 [2024-11-26 13:37:32.488780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:44.137 [2024-11-26 13:37:32.488790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.493576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.493684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.137 [2024-11-26 13:37:32.493696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:24:44.137 [2024-11-26 13:37:32.493703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.493771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.493780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.137 [2024-11-26 13:37:32.493787] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:44.137 [2024-11-26 13:37:32.493796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.493830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.493838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:44.137 [2024-11-26 13:37:32.493846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:44.137 [2024-11-26 13:37:32.493853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.493869] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:44.137 [2024-11-26 13:37:32.496752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.496848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.137 [2024-11-26 13:37:32.496863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:24:44.137 [2024-11-26 13:37:32.496870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.496898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.496905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:44.137 [2024-11-26 13:37:32.496912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:44.137 [2024-11-26 13:37:32.496918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.496932] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:44.137 [2024-11-26 13:37:32.497035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:44.137 [2024-11-26 13:37:32.497047] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:44.137 [2024-11-26 13:37:32.497055] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:44.137 [2024-11-26 13:37:32.497064] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:44.137 [2024-11-26 13:37:32.497071] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:44.137 [2024-11-26 13:37:32.497079] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:44.137 [2024-11-26 13:37:32.497084] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:44.137 [2024-11-26 13:37:32.497093] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:44.137 [2024-11-26 13:37:32.497098] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:44.137 [2024-11-26 13:37:32.497105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.497115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:44.137 [2024-11-26 13:37:32.497122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:24:44.137 [2024-11-26 13:37:32.497127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.497193] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.137 [2024-11-26 13:37:32.497199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:44.137 [2024-11-26 13:37:32.497206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:44.137 [2024-11-26 13:37:32.497211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.137 [2024-11-26 13:37:32.497291] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:44.137 [2024-11-26 13:37:32.497297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:44.137 [2024-11-26 13:37:32.497305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.137 [2024-11-26 13:37:32.497311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:44.137 [2024-11-26 13:37:32.497323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:44.137 [2024-11-26 13:37:32.497334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:44.137 [2024-11-26 13:37:32.497341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.137 [2024-11-26 13:37:32.497352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:44.137 [2024-11-26 13:37:32.497357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:44.137 [2024-11-26 13:37:32.497365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:44.137 [2024-11-26 13:37:32.497370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:44.137 [2024-11-26 13:37:32.497378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:44.137 [2024-11-26 13:37:32.497383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:44.137 [2024-11-26 13:37:32.497395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:44.137 [2024-11-26 13:37:32.497402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:44.137 [2024-11-26 13:37:32.497414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:44.137 [2024-11-26 13:37:32.497419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:44.138 [2024-11-26 13:37:32.497430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:44.138 [2024-11-26 13:37:32.497469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497480] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:44.138 [2024-11-26 13:37:32.497484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:44.138 [2024-11-26 13:37:32.497504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.138 [2024-11-26 13:37:32.497515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:44.138 [2024-11-26 13:37:32.497520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:44.138 [2024-11-26 13:37:32.497526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:44.138 [2024-11-26 13:37:32.497531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:44.138 [2024-11-26 13:37:32.497538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:44.138 [2024-11-26 13:37:32.497543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:44.138 [2024-11-26 13:37:32.497554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:44.138 [2024-11-26 13:37:32.497560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497564] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:44.138 [2024-11-26 13:37:32.497571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:44.138 [2024-11-26 13:37:32.497577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:44.138 [2024-11-26 13:37:32.497593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:44.138 [2024-11-26 13:37:32.497600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:44.138 [2024-11-26 13:37:32.497605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:44.138 [2024-11-26 13:37:32.497611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:44.138 [2024-11-26 13:37:32.497616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:44.138 [2024-11-26 13:37:32.497623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:44.138 [2024-11-26 13:37:32.497630] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:44.138 [2024-11-26 13:37:32.497644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:44.138 [2024-11-26 13:37:32.497658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:44.138 [2024-11-26 13:37:32.497664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:44.138 [2024-11-26 13:37:32.497670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:44.138 [2024-11-26 13:37:32.497676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:44.138 [2024-11-26 13:37:32.497682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:44.138 [2024-11-26 13:37:32.497687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:44.138 [2024-11-26 13:37:32.497694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:44.138 [2024-11-26 13:37:32.497699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:44.138 [2024-11-26 13:37:32.497707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:44.138 [2024-11-26 13:37:32.497740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:44.138 [2024-11-26 13:37:32.497748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:44.138 [2024-11-26 13:37:32.497762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:44.138 [2024-11-26 13:37:32.497767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:44.138 [2024-11-26 13:37:32.497774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:44.138 [2024-11-26 13:37:32.497780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.138 [2024-11-26 13:37:32.497787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:44.138 [2024-11-26 13:37:32.497793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:24:44.138 [2024-11-26 13:37:32.497800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.138 [2024-11-26 13:37:32.497842] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:44.138 [2024-11-26 13:37:32.497857] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:46.670 [2024-11-26 13:37:34.647232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.647469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:46.670 [2024-11-26 13:37:34.647537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2149.381 ms 00:24:46.670 [2024-11-26 13:37:34.647565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.672649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.672827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.670 [2024-11-26 13:37:34.672894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.816 ms 00:24:46.670 [2024-11-26 13:37:34.672919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.673067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.673161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.670 [2024-11-26 13:37:34.673185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:46.670 [2024-11-26 13:37:34.673209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.703322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.703499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.670 [2024-11-26 13:37:34.703560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.060 ms 00:24:46.670 [2024-11-26 13:37:34.703588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.703638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.703662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.670 [2024-11-26 13:37:34.703682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:46.670 [2024-11-26 13:37:34.703752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.704119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.704221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.670 [2024-11-26 13:37:34.704276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:24:46.670 [2024-11-26 13:37:34.704300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.704468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.704555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.670 [2024-11-26 13:37:34.704605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:46.670 [2024-11-26 13:37:34.704630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.718315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.718428] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.670 [2024-11-26 13:37:34.718498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:24:46.670 [2024-11-26 13:37:34.718522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.729670] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.670 [2024-11-26 13:37:34.732341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.732436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.670 [2024-11-26 13:37:34.732499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.732 ms 00:24:46.670 [2024-11-26 13:37:34.732522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.810163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.810363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:46.670 [2024-11-26 13:37:34.810425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.596 ms 00:24:46.670 [2024-11-26 13:37:34.810477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.810680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.810717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.670 [2024-11-26 13:37:34.810807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:46.670 [2024-11-26 13:37:34.810829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.670 [2024-11-26 13:37:34.834448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.670 [2024-11-26 13:37:34.834594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:46.670 [2024-11-26 13:37:34.834650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.525 ms 00:24:46.670 [2024-11-26 13:37:34.834672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.857237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.857407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:46.671 [2024-11-26 13:37:34.857493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.470 ms 00:24:46.671 [2024-11-26 13:37:34.857514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.858116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.858194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.671 [2024-11-26 13:37:34.858243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:24:46.671 [2024-11-26 13:37:34.858267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.923652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.923833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:46.671 [2024-11-26 13:37:34.923891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.330 ms 00:24:46.671 [2024-11-26 13:37:34.923914] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.948039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.948174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:46.671 [2024-11-26 13:37:34.948228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.036 ms 00:24:46.671 [2024-11-26 13:37:34.948251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.971331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.971378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:46.671 [2024-11-26 13:37:34.971392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.034 ms 00:24:46.671 [2024-11-26 13:37:34.971399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.994683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.994721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.671 [2024-11-26 13:37:34.994735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.247 ms 00:24:46.671 [2024-11-26 13:37:34.994743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.994782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.994791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.671 [2024-11-26 13:37:34.994803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:46.671 [2024-11-26 13:37:34.994810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.994887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.671 [2024-11-26 13:37:34.994898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.671 [2024-11-26 13:37:34.994907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:46.671 [2024-11-26 13:37:34.994915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.671 [2024-11-26 13:37:34.995756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2508.533 ms, result 0 00:24:46.671 { 00:24:46.671 "name": "ftl0", 00:24:46.671 "uuid": "3e55ade6-60d3-44a5-9469-fbf187dae141" 00:24:46.671 } 00:24:46.671 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:46.671 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:46.671 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:46.671 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:46.671 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:46.929 /dev/nbd0 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:46.929 1+0 records in 00:24:46.929 1+0 records out 00:24:46.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291365 s, 14.1 MB/s 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:46.929 13:37:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:47.186 [2024-11-26 13:37:35.524184] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:24:47.186 [2024-11-26 13:37:35.524302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79885 ] 00:24:47.186 [2024-11-26 13:37:35.683498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.444 [2024-11-26 13:37:35.778928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.821  [2024-11-26T13:37:38.324Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-26T13:37:39.259Z] Copying: 393/1024 [MB] (197 MBps) [2024-11-26T13:37:40.191Z] Copying: 640/1024 [MB] (247 MBps) [2024-11-26T13:37:40.758Z] Copying: 891/1024 [MB] (250 MBps) [2024-11-26T13:37:41.324Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:24:52.754 00:24:52.754 13:37:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:54.656 13:37:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:54.656 [2024-11-26 13:37:42.767838] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
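With ftl0 exported at /dev/nbd0, the test stages 262144 random 4 KiB blocks (1 GiB) in a scratch file, records its md5sum for later verification, and then replays the file into the NBD device with O_DIRECT; the staging copy above averaged 226 MBps, while the replay through the FTL write path that follows settles near 27 MBps. Condensed from this run (paths shortened; device and bdev names as logged), the sequence is roughly:

    SPDK=/home/vagrant/spdk_repo/spdk
    modprobe nbd
    "$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    "$SPDK/build/bin/spdk_dd" -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0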
00:24:54.656 [2024-11-26 13:37:42.767934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79968 ] 00:24:54.656 [2024-11-26 13:37:42.917420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.656 [2024-11-26 13:37:42.998417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.031  [2024-11-26T13:37:45.536Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-26T13:37:46.471Z] Copying: 58/1024 [MB] (28 MBps) [2024-11-26T13:37:47.406Z] Copying: 85/1024 [MB] (26 MBps) [2024-11-26T13:37:48.347Z] Copying: 113/1024 [MB] (28 MBps) [2024-11-26T13:37:49.286Z] Copying: 138/1024 [MB] (24 MBps) [2024-11-26T13:37:50.230Z] Copying: 165/1024 [MB] (27 MBps) [2024-11-26T13:37:51.174Z] Copying: 194/1024 [MB] (28 MBps) [2024-11-26T13:37:52.216Z] Copying: 221/1024 [MB] (27 MBps) [2024-11-26T13:37:53.607Z] Copying: 250/1024 [MB] (28 MBps) [2024-11-26T13:37:54.177Z] Copying: 275/1024 [MB] (25 MBps) [2024-11-26T13:37:55.558Z] Copying: 300/1024 [MB] (25 MBps) [2024-11-26T13:37:56.500Z] Copying: 323/1024 [MB] (22 MBps) [2024-11-26T13:37:57.445Z] Copying: 348/1024 [MB] (25 MBps) [2024-11-26T13:37:58.395Z] Copying: 372/1024 [MB] (23 MBps) [2024-11-26T13:37:59.349Z] Copying: 395/1024 [MB] (22 MBps) [2024-11-26T13:38:00.292Z] Copying: 417/1024 [MB] (21 MBps) [2024-11-26T13:38:01.235Z] Copying: 442/1024 [MB] (25 MBps) [2024-11-26T13:38:02.616Z] Copying: 467/1024 [MB] (24 MBps) [2024-11-26T13:38:03.189Z] Copying: 494/1024 [MB] (26 MBps) [2024-11-26T13:38:04.574Z] Copying: 523/1024 [MB] (28 MBps) [2024-11-26T13:38:05.515Z] Copying: 551/1024 [MB] (28 MBps) [2024-11-26T13:38:06.457Z] Copying: 580/1024 [MB] (29 MBps) [2024-11-26T13:38:07.400Z] Copying: 605/1024 [MB] (24 MBps) [2024-11-26T13:38:08.344Z] Copying: 632/1024 [MB] (26 MBps) [2024-11-26T13:38:09.287Z] Copying: 662/1024 [MB] (30 MBps) [2024-11-26T13:38:10.227Z] Copying: 693/1024 [MB] (30 MBps) [2024-11-26T13:38:11.613Z] Copying: 723/1024 [MB] (30 MBps) [2024-11-26T13:38:12.184Z] Copying: 759/1024 [MB] (35 MBps) [2024-11-26T13:38:13.558Z] Copying: 789/1024 [MB] (30 MBps) [2024-11-26T13:38:14.495Z] Copying: 819/1024 [MB] (30 MBps) [2024-11-26T13:38:15.456Z] Copying: 851/1024 [MB] (31 MBps) [2024-11-26T13:38:16.393Z] Copying: 885/1024 [MB] (33 MBps) [2024-11-26T13:38:17.423Z] Copying: 913/1024 [MB] (28 MBps) [2024-11-26T13:38:18.364Z] Copying: 944/1024 [MB] (30 MBps) [2024-11-26T13:38:19.308Z] Copying: 968/1024 [MB] (23 MBps) [2024-11-26T13:38:20.251Z] Copying: 991/1024 [MB] (23 MBps) [2024-11-26T13:38:20.512Z] Copying: 1016/1024 [MB] (24 MBps) [2024-11-26T13:38:21.122Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:25:32.552 00:25:32.552 13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:32.552 13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:32.813 13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:33.076 [2024-11-26 13:38:21.423669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.423723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.076 [2024-11-26 13:38:21.423737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:25:33.076 [2024-11-26 13:38:21.423747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.423772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.076 [2024-11-26 13:38:21.426375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.426405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.076 [2024-11-26 13:38:21.426417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:25:33.076 [2024-11-26 13:38:21.426425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.428234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.428266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.076 [2024-11-26 13:38:21.428277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:25:33.076 [2024-11-26 13:38:21.428285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.444149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.444291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.076 [2024-11-26 13:38:21.444311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.842 ms 00:25:33.076 [2024-11-26 13:38:21.444319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.450496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.450601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:33.076 [2024-11-26 13:38:21.450620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:25:33.076 [2024-11-26 13:38:21.450629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.475524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.475557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.076 [2024-11-26 13:38:21.475571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.822 ms 00:25:33.076 [2024-11-26 13:38:21.475579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.490916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.490952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.076 [2024-11-26 13:38:21.490970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.294 ms 00:25:33.076 [2024-11-26 13:38:21.490978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.491126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 [2024-11-26 13:38:21.491137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.076 [2024-11-26 13:38:21.491147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:33.076 [2024-11-26 13:38:21.491155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.076 [2024-11-26 13:38:21.514834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.076 
[2024-11-26 13:38:21.514869] mngt/ftl_mngt.c: 428-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata, duration: 23.658 ms, status: 0
[2024-11-26 13:38:21.537996] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata, duration: 23.068 ms, status: 0
[2024-11-26 13:38:21.560997] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock, duration: 22.786 ms, status: 0
[2024-11-26 13:38:21.584077] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state, duration: 22.843 ms, status: 0
[2024-11-26 13:38:21.584284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-26 13:38:21.584299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
[2024-11-26 13:38:21.585189] ftl_debug.c: 211-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[FTL][ftl0] device UUID:      3e55ade6-60d3-44a5-9469-fbf187dae141
[FTL][ftl0] total valid LBAs: 0
[FTL][ftl0] total writes:     960
[FTL][ftl0] user writes:      0
[FTL][ftl0] WAF:              inf
[FTL][ftl0] limits:           crit: 0, high: 0, low: 0, start: 0
[2024-11-26 13:38:21.585296] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics, duration: 1.013 ms, status: 0
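The WAF: inf entry in the statistics dump above is worth a gloss: write amplification factor is media writes divided by user writes, and this dump records 960 total writes (all FTL metadata at this point) against 0 user writes, so the quotient has no finite value:

    WAF = total writes / user writes = 960 / 0 -> inf

(The field names are the dump's own; reading "inf" as the division-by-zero case of the usual write-amplification definition is an assumption, not something the log states.)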
[2024-11-26 13:38:21.597743] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P, duration: 12.387 ms, status: 0
[2024-11-26 13:38:21.598164] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing, duration: 0.323 ms, status: 0
[2024-11-26 13:38:21.639915] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback (each duration: 0.000 ms, status: 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-26 13:38:21.780826] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 357.123 ms, result 0
true
13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79760
13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79760
13:38:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
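The three @83-@87 commands above are the heart of the dirty-shutdown scenario: the target is killed with SIGKILL so the FTL device never reaches its clean-shutdown path, its stale shared-memory trace file is removed, and a fresh file of random data is prepared as the write source for the next phase. A minimal sketch of that sequence, with the PID and paths parameterized rather than hard-coded (the variable names are illustrative, not taken from dirty_shutdown.sh):

  # Hard-kill the running SPDK target so no clean 'FTL shutdown' can run
  tgt_pid=79760                                   # PID observed in this log
  kill -9 "$tgt_pid"
  # Remove the dead target's shared-memory trace file
  rm -f "/dev/shm/spdk_tgt_trace.pid${tgt_pid}"
  # Prepare 1 GiB of random data (262144 blocks x 4096 B) as the write source
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
      --bs=4096 --count=262144

kill -9 is the crux: a SIGTERM would have let spdk_tgt run the clean 'FTL shutdown' sequence logged above, while SIGKILL guarantees the device is left dirty for the recovery path exercised below.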
[2024-11-26 13:38:21.879289] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
[2024-11-26 13:38:21.879419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ]
[2024-11-26 13:38:22.042192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 13:38:22.150205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-26T13:38:24.505Z] Copying: 189/1024 [MB] (189 MBps)
[2024-11-26T13:38:25.447Z] Copying: 381/1024 [MB] (191 MBps)
[2024-11-26T13:38:26.390Z] Copying: 588/1024 [MB] (207 MBps)
[2024-11-26T13:38:27.327Z] Copying: 840/1024 [MB] (251 MBps)
[2024-11-26T13:38:27.893Z] Copying: 1024/1024 [MB] (average 215 MBps)
/home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79760 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
13:38:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-26 13:38:27.793063] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
[2024-11-26 13:38:27.793185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80442 ]
[2024-11-26 13:38:27.953595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-26 13:38:28.052813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-26 13:38:28.306295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 13:38:28.306360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-26 13:38:28.370187] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore
[2024-11-26 13:38:28.370427] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
[2024-11-26 13:38:28.370720] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
[2024-11-26 13:38:28.754531] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration, duration: 0.006 ms, status: 0
[2024-11-26 13:38:28.754653] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev, duration: 0.028 ms, status: 0
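The sizes in the two spdk_dd invocations are easy to cross-check against the progress meter:

    262144 blocks x 4096 B = 1073741824 B = 1024 MiB

which is exactly the 1024/1024 [MB] reported above. The second invocation then replays that file into ftl0 with --seek=262144, i.e. offset by the same 262144 blocks (1 GiB, assuming the dd-style block-based seek semantics the option name suggests).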
[2024-11-26 13:38:28.754700] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-26 13:38:28.755339] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-26 13:38:28.755361] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev, duration: 0.665 ms, status: 0
[2024-11-26 13:38:28.756461] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-26 13:38:28.768562] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block, duration: 12.103 ms, status: 0
[2024-11-26 13:38:28.768664] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block, duration: 0.018 ms, status: 0
[2024-11-26 13:38:28.773593] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools, duration: 4.855 ms, status: 0
[2024-11-26 13:38:28.773707] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands, duration: 0.050 ms, status: 0
[2024-11-26 13:38:28.773779] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device, duration: 0.007 ms, status: 0
[2024-11-26 13:38:28.773826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-26 13:38:28.777131] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel, duration: 3.311 ms, status: 0
[2024-11-26 13:38:28.777203] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands, duration: 0.011 ms, status: 0
[2024-11-26 13:38:28.777247] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-26 13:38:28.777264] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes; nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-11-26 13:38:28.777463] ftl_layout.c: 685-692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
[2024-11-26 13:38:28.777508] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout, duration: 0.263 ms, status: 0
[2024-11-26 13:38:28.777609] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout, duration: 0.068 ms, status: 0
[2024-11-26 13:38:28.777738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb:              offset:      0.00 MiB, blocks:      0.12 MiB
    Region l2p:             offset:      0.12 MiB, blocks:     80.00 MiB
    Region band_md:         offset:     80.12 MiB, blocks:      0.50 MiB
    Region band_md_mirror:  offset:     80.62 MiB, blocks:      0.50 MiB
    Region nvc_md:          offset:    113.88 MiB, blocks:      0.12 MiB
    Region nvc_md_mirror:   offset:    114.00 MiB, blocks:      0.12 MiB
    Region p2l0:            offset:     81.12 MiB, blocks:      8.00 MiB
    Region p2l1:            offset:     89.12 MiB, blocks:      8.00 MiB
    Region p2l2:            offset:     97.12 MiB, blocks:      8.00 MiB
    Region p2l3:            offset:    105.12 MiB, blocks:      8.00 MiB
    Region trim_md:         offset:    113.12 MiB, blocks:      0.25 MiB
    Region trim_md_mirror:  offset:    113.38 MiB, blocks:      0.25 MiB
    Region trim_log:        offset:    113.62 MiB, blocks:      0.12 MiB
    Region trim_log_mirror: offset:    113.75 MiB, blocks:      0.12 MiB
[2024-11-26 13:38:28.778026] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror:       offset:      0.00 MiB, blocks:      0.12 MiB
    Region vmap:            offset: 102400.25 MiB, blocks:      3.38 MiB
    Region data_btm:        offset:      0.25 MiB, blocks: 102400.00 MiB
[2024-11-26 13:38:28.778097] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-26 13:38:28.778210] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-26 13:38:28.778253] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade, duration: 0.584 ms, status: 0
[2024-11-26 13:38:28.804332] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata, duration: 26.005 ms, status: 0
[2024-11-26 13:38:28.804494] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses, duration: 0.074 ms, status: 0
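The hex block counts in the SB metadata layout line up with the MiB figures printed just before them, assuming the 4 KiB FTL block size implied by the --bs=4096 writes elsewhere in this log. For example, the base-dev data region (type 0x9) spans blk_sz 0x1900000 blocks:

    0x1900000 blocks = 26214400 x 4096 B = 102400 MiB

which matches the 102400.00 MiB reported for the data_btm region in the base device layout above.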
[2024-11-26 13:38:28.850772] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache, duration: 46.194 ms, status: 0
[2024-11-26 13:38:28.850899] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map, duration: 0.003 ms, status: 0
[2024-11-26 13:38:28.851307] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map, duration: 0.307 ms, status: 0
[2024-11-26 13:38:28.851508] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata, duration: 0.126 ms, status: 0
[2024-11-26 13:38:28.864587] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc, duration: 13.033 ms, status: 0
[2024-11-26 13:38:28.876707] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-26 13:38:28.876742] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-26 13:38:28.876753] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata, duration: 12.022 ms, status: 0
[2024-11-26 13:38:28.900907] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata, duration: 24.095 ms, status: 0
[2024-11-26 13:38:28.912518] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata, duration: 11.524 ms, status: 0
[2024-11-26 13:38:28.923720] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata, duration: 11.121 ms, status: 0
[2024-11-26 13:38:28.924386] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing, duration: 0.535 ms, status: 0
[2024-11-26 13:38:28.980148] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints, duration: 55.692 ms, status: 0
[2024-11-26 13:38:28.990559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-26 13:38:28.992963] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P, duration: 12.679 ms, status: 0
[2024-11-26 13:38:28.993115] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P, duration: 0.011 ms, status: 0
[2024-11-26 13:38:28.993206] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization, duration: 0.029 ms, status: 0
[2024-11-26 13:38:28.993253] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller, duration: 0.008 ms, status: 0
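A quick sum of the step durations above shows where the startup time goes: the twelve longest steps (Load super block 12.103, Initialize memory pools 4.855, Initialize core IO channel 3.311, Initialize metadata 26.005, Initialize NV cache 46.194, Initialize reloc 13.033, Restore NV cache metadata 12.022, Restore valid map metadata 24.095, Restore band info metadata 11.524, Restore trim metadata 11.121, Restore P2L checkpoints 55.692, Initialize L2P 12.679 ms) add up to about 232.6 ms; together with the 22.467 ms 'Set FTL dirty state' step below, that accounts for nearly all of the 262.180 ms 'FTL startup' total reported at the end of the sequence.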
[2024-11-26 13:38:28.993305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-26 13:38:28.993314] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup, duration: 0.010 ms, status: 0
[2024-11-26 13:38:29.015825] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state, duration: 22.467 ms, status: 0
[2024-11-26 13:38:29.015987] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization, duration: 0.042 ms, status: 0
[2024-11-26 13:38:29.017129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 262.180 ms, result 0
[2024-11-26T13:38:31.100Z] Copying: 42/1024 [MB] (42 MBps)
[2024-11-26T13:38:32.041Z] Copying: 88/1024 [MB] (45 MBps)
[2024-11-26T13:38:33.415Z] Copying: 124/1024 [MB] (36 MBps)
[2024-11-26T13:38:34.349Z] Copying: 168/1024 [MB] (43 MBps)
[2024-11-26T13:38:35.284Z] Copying: 212/1024 [MB] (43 MBps)
[2024-11-26T13:38:36.263Z] Copying: 256/1024 [MB] (43 MBps)
[2024-11-26T13:38:37.203Z] Copying: 294/1024 [MB] (37 MBps)
[2024-11-26T13:38:38.147Z] Copying: 330/1024 [MB] (36 MBps)
[2024-11-26T13:38:39.091Z] Copying: 358/1024 [MB] (28 MBps)
[2024-11-26T13:38:40.474Z] Copying: 386/1024 [MB] (27 MBps)
[2024-11-26T13:38:41.046Z] Copying: 419/1024 [MB] (33 MBps)
[2024-11-26T13:38:42.436Z] Copying: 452/1024 [MB] (32 MBps)
[2024-11-26T13:38:43.130Z] Copying: 482/1024 [MB] (30 MBps)
[2024-11-26T13:38:44.089Z] Copying: 518/1024 [MB] (35 MBps)
[2024-11-26T13:38:45.472Z] Copying: 547/1024 [MB] (29 MBps)
[2024-11-26T13:38:46.046Z] Copying: 578/1024 [MB] (31 MBps)
[2024-11-26T13:38:47.428Z] Copying: 603/1024 [MB] (24 MBps)
[2024-11-26T13:38:48.370Z] Copying: 633/1024 [MB] (30 MBps)
[2024-11-26T13:38:49.303Z] Copying: 662/1024 [MB] (28 MBps)
[2024-11-26T13:38:50.234Z] Copying: 707/1024 [MB] (44 MBps)
[2024-11-26T13:38:51.168Z] Copying: 751/1024 [MB] (44 MBps)
[2024-11-26T13:38:52.103Z] Copying: 797/1024 [MB] (46 MBps)
[2024-11-26T13:38:53.037Z] Copying: 840/1024 [MB] (43 MBps)
[2024-11-26T13:38:54.419Z] Copying: 882/1024 [MB] (41 MBps)
[2024-11-26T13:38:55.358Z] Copying: 925/1024 [MB] (42 MBps)
[2024-11-26T13:38:56.349Z] Copying: 962/1024 [MB] (37 MBps)
[2024-11-26T13:38:57.282Z] Copying: 1006/1024 [MB] (43 MBps)
[2024-11-26T13:38:57.540Z] Copying: 1023/1024 [MB] (17 MBps)
[2024-11-26T13:38:57.540Z] Copying: 1024/1024 [MB] (average 35 MBps)
[2024-11-26 13:38:57.512836] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel, duration: 0.003 ms, status: 0
[2024-11-26 13:38:57.514936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-26 13:38:57.520639] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device, duration: 5.665 ms, status: 0
[2024-11-26 13:38:57.531682] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller, duration: 8.153 ms, status: 0
[2024-11-26 13:38:57.548192] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P, duration: 16.443 ms, status: 0
[2024-11-26 13:38:57.554386] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims, duration: 6.120 ms, status: 0
[2024-11-26 13:38:57.577416] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata, duration: 22.936 ms, status: 0
[2024-11-26 13:38:57.591300] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata, duration: 13.789 ms, status: 0
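As a sanity check on the second copy above: 1024 MB at the reported average of 35 MBps is about 29 s, which matches the wall clock; 'FTL startup' finished at 13:38:29.017 and the final progress tick lands at 13:38:57.540Z.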
[2024-11-26 13:38:57.650674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:09.230 [2024-11-26 13:38:57.650688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.237 ms 00:26:09.230 [2024-11-26 13:38:57.650696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.230 [2024-11-26 13:38:57.673488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.230 [2024-11-26 13:38:57.673521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:09.230 [2024-11-26 13:38:57.673530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.777 ms 00:26:09.230 [2024-11-26 13:38:57.673545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.230 [2024-11-26 13:38:57.695405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.230 [2024-11-26 13:38:57.695435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:09.230 [2024-11-26 13:38:57.695452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.829 ms 00:26:09.230 [2024-11-26 13:38:57.695459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.230 [2024-11-26 13:38:57.717602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.230 [2024-11-26 13:38:57.717633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:09.230 [2024-11-26 13:38:57.717643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.107 ms 00:26:09.230 [2024-11-26 13:38:57.717650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.230 [2024-11-26 13:38:57.739311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.230 [2024-11-26 13:38:57.739343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:09.230 [2024-11-26 13:38:57.739353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.610 ms 00:26:09.230 [2024-11-26 13:38:57.739360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.230 [2024-11-26 13:38:57.739389] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:09.230 [2024-11-26 13:38:57.739404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:26:09.230 [2024-11-26 13:38:57.739414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:09.230 [2024-11-26 13:38:57.739488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
...
00:26:09.232 [2024-11-26 13:38:57.740161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:09.232 [2024-11-26 13:38:57.740177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:09.232 [2024-11-26 13:38:57.740184] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e55ade6-60d3-44a5-9469-fbf187dae141 00:26:09.232 [2024-11-26 13:38:57.740201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:26:09.232 [2024-11-26 13:38:57.740208] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:26:09.232 [2024-11-26 13:38:57.740215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:26:09.232 [2024-11-26 13:38:57.740223] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:26:09.232 [2024-11-26 13:38:57.740230] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:09.232 [2024-11-26 13:38:57.740238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:09.232 [2024-11-26 13:38:57.740245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:09.232 [2024-11-26 13:38:57.740251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:09.232 [2024-11-26
13:38:57.740258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:09.232 [2024-11-26 13:38:57.740264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.232 [2024-11-26 13:38:57.740272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:09.232 [2024-11-26 13:38:57.740280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:26:09.232 [2024-11-26 13:38:57.740286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.752478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.232 [2024-11-26 13:38:57.752508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:09.232 [2024-11-26 13:38:57.752518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.177 ms 00:26:09.232 [2024-11-26 13:38:57.752525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.752858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:09.232 [2024-11-26 13:38:57.752872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:09.232 [2024-11-26 13:38:57.752880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:26:09.232 [2024-11-26 13:38:57.752891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.785178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.232 [2024-11-26 13:38:57.785217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:09.232 [2024-11-26 13:38:57.785227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.232 [2024-11-26 13:38:57.785235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.785295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.232 [2024-11-26 13:38:57.785303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:09.232 [2024-11-26 13:38:57.785311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.232 [2024-11-26 13:38:57.785320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.785370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.232 [2024-11-26 13:38:57.785379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:09.232 [2024-11-26 13:38:57.785387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.232 [2024-11-26 13:38:57.785394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.232 [2024-11-26 13:38:57.785408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.232 [2024-11-26 13:38:57.785416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:09.232 [2024-11-26 13:38:57.785423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.232 [2024-11-26 13:38:57.785430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.861095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.861141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:09.491 [2024-11-26 13:38:57.861152] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.861159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:09.491 [2024-11-26 13:38:57.923534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:09.491 [2024-11-26 13:38:57.923629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:09.491 [2024-11-26 13:38:57.923685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:09.491 [2024-11-26 13:38:57.923797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:09.491 [2024-11-26 13:38:57.923847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:09.491 [2024-11-26 13:38:57.923910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.923955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:09.491 [2024-11-26 13:38:57.923964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:09.491 [2024-11-26 13:38:57.923972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:09.491 [2024-11-26 13:38:57.923979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:09.491 [2024-11-26 13:38:57.924087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.131 ms, result 0 00:26:10.872 00:26:10.872 00:26:10.872 
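
The statistics block in the shutdown dump above is enough to cross-check the reported write amplification by hand: WAF here is simply total device writes over user writes, using the two counters the log itself prints (130240 total, 129280 user). A throwaway check, illustrative only and not part of the test scripts:

    python3 -c 'total, user = 130240, 129280; print(round(total / user, 4), total - user)'
    1.0074 960

which matches the WAF: 1.0074 line; the 960-block difference is the non-user (metadata) write overhead of this run.
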
13:38:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:12.780 13:39:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:13.039 [2024-11-26 13:39:01.395150] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:26:13.039 [2024-11-26 13:39:01.395274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80781 ] 00:26:13.039 [2024-11-26 13:39:01.557771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.297 [2024-11-26 13:39:01.698034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.555 [2024-11-26 13:39:01.954672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.555 [2024-11-26 13:39:01.954735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:13.555 [2024-11-26 13:39:02.108341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.555 [2024-11-26 13:39:02.108402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:13.555 [2024-11-26 13:39:02.108416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:13.555 [2024-11-26 13:39:02.108423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.555 [2024-11-26 13:39:02.108484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.555 [2024-11-26 13:39:02.108496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.555 [2024-11-26 13:39:02.108504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:13.555 [2024-11-26 13:39:02.108511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.555 [2024-11-26 13:39:02.108530] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:13.556 [2024-11-26 13:39:02.109246] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:13.556 [2024-11-26 13:39:02.109268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.556 [2024-11-26 13:39:02.109276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.556 [2024-11-26 13:39:02.109284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:26:13.556 [2024-11-26 13:39:02.109292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.556 [2024-11-26 13:39:02.110380] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:13.814 [2024-11-26 13:39:02.122646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.122680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:13.814 [2024-11-26 13:39:02.122692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.268 ms 00:26:13.814 [2024-11-26 13:39:02.122700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.122755] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.122764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:13.814 [2024-11-26 13:39:02.122772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:13.814 [2024-11-26 13:39:02.122779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.127900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.127932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.814 [2024-11-26 13:39:02.127941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:26:13.814 [2024-11-26 13:39:02.127952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.128018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.128027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.814 [2024-11-26 13:39:02.128034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:13.814 [2024-11-26 13:39:02.128042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.128082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.128091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:13.814 [2024-11-26 13:39:02.128099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:13.814 [2024-11-26 13:39:02.128106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.128129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:13.814 [2024-11-26 13:39:02.131317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.131345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.814 [2024-11-26 13:39:02.131357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:26:13.814 [2024-11-26 13:39:02.131364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.131391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.131399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:13.814 [2024-11-26 13:39:02.131406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:13.814 [2024-11-26 13:39:02.131413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.814 [2024-11-26 13:39:02.131432] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:13.814 [2024-11-26 13:39:02.131458] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:13.814 [2024-11-26 13:39:02.131500] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:13.814 [2024-11-26 13:39:02.131517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:13.814 [2024-11-26 13:39:02.131618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 
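
Two unit conversions make the layout dump that follows (and the spdk_dd invocation above) easy to sanity-check. Region sizes appear both as hex block counts (blk_sz) and in MiB, which pins the block size at 4 KiB: the l2p region reported as 80.00 MiB lines up with the Region type:0x2 entry's blk_sz:0x5000, i.e. 20480 blocks. At that same 4 KiB granularity, the --count=262144 passed to spdk_dd is exactly the 1024 [MB] the copy progress later counts up to. An illustrative one-liner (assuming the 4 KiB block size inferred above):

    python3 -c 'print(0x5000 * 4096 / 2**20, 262144 * 4096 // 2**20)'
    80.0 1024
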
00:26:13.814 [2024-11-26 13:39:02.131628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:13.814 [2024-11-26 13:39:02.131639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:13.814 [2024-11-26 13:39:02.131648] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:13.814 [2024-11-26 13:39:02.131657] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:13.814 [2024-11-26 13:39:02.131664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:13.814 [2024-11-26 13:39:02.131672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:13.814 [2024-11-26 13:39:02.131679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:13.814 [2024-11-26 13:39:02.131688] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:13.814 [2024-11-26 13:39:02.131695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.814 [2024-11-26 13:39:02.131703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:13.814 [2024-11-26 13:39:02.131710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:26:13.815 [2024-11-26 13:39:02.131717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.131799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.131807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:13.815 [2024-11-26 13:39:02.131813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:13.815 [2024-11-26 13:39:02.131820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.131925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:13.815 [2024-11-26 13:39:02.131941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:13.815 [2024-11-26 13:39:02.131950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.815 [2024-11-26 13:39:02.131957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.131965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:13.815 [2024-11-26 13:39:02.131971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.131978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:13.815 [2024-11-26 13:39:02.131985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:13.815 [2024-11-26 13:39:02.131992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:13.815 [2024-11-26 13:39:02.131998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.815 [2024-11-26 13:39:02.132005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:13.815 [2024-11-26 13:39:02.132012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:13.815 [2024-11-26 13:39:02.132019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:13.815 [2024-11-26 13:39:02.132032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:13.815 
[2024-11-26 13:39:02.132039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:13.815 [2024-11-26 13:39:02.132045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:13.815 [2024-11-26 13:39:02.132058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:13.815 [2024-11-26 13:39:02.132077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:13.815 [2024-11-26 13:39:02.132097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:13.815 [2024-11-26 13:39:02.132115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:13.815 [2024-11-26 13:39:02.132135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:13.815 [2024-11-26 13:39:02.132154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.815 [2024-11-26 13:39:02.132167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:13.815 [2024-11-26 13:39:02.132173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:13.815 [2024-11-26 13:39:02.132179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:13.815 [2024-11-26 13:39:02.132185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:13.815 [2024-11-26 13:39:02.132191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:13.815 [2024-11-26 13:39:02.132197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:13.815 [2024-11-26 13:39:02.132210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:13.815 [2024-11-26 13:39:02.132216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132224] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:13.815 [2024-11-26 13:39:02.132234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:13.815 [2024-11-26 13:39:02.132241] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:13.815 [2024-11-26 13:39:02.132254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:13.815 [2024-11-26 13:39:02.132261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:13.815 [2024-11-26 13:39:02.132267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:13.815 [2024-11-26 13:39:02.132274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:13.815 [2024-11-26 13:39:02.132280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:13.815 [2024-11-26 13:39:02.132286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:13.815 [2024-11-26 13:39:02.132294] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:13.815 [2024-11-26 13:39:02.132303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:13.815 [2024-11-26 13:39:02.132320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:13.815 [2024-11-26 13:39:02.132326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:13.815 [2024-11-26 13:39:02.132333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:13.815 [2024-11-26 13:39:02.132340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:13.815 [2024-11-26 13:39:02.132346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:13.815 [2024-11-26 13:39:02.132353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:13.815 [2024-11-26 13:39:02.132360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:13.815 [2024-11-26 13:39:02.132367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:13.815 [2024-11-26 13:39:02.132374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:26:13.815 [2024-11-26 13:39:02.132408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:13.815 [2024-11-26 13:39:02.132415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.815 [2024-11-26 13:39:02.132430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:13.815 [2024-11-26 13:39:02.132437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:13.815 [2024-11-26 13:39:02.132459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:13.815 [2024-11-26 13:39:02.132468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.132476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:13.815 [2024-11-26 13:39:02.132483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:26:13.815 [2024-11-26 13:39:02.132490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.158458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.158497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.815 [2024-11-26 13:39:02.158508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.912 ms 00:26:13.815 [2024-11-26 13:39:02.158519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.158604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.158612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:13.815 [2024-11-26 13:39:02.158620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:13.815 [2024-11-26 13:39:02.158627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.204824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.204873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.815 [2024-11-26 13:39:02.204887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.137 ms 00:26:13.815 [2024-11-26 13:39:02.204895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.204944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.815 [2024-11-26 13:39:02.204954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.815 [2024-11-26 13:39:02.204965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.815 [2024-11-26 13:39:02.204973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.815 [2024-11-26 13:39:02.205343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.205368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:26:13.816 [2024-11-26 13:39:02.205378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:26:13.816 [2024-11-26 13:39:02.205386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.205531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.205546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.816 [2024-11-26 13:39:02.205559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:26:13.816 [2024-11-26 13:39:02.205566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.218602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.218634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.816 [2024-11-26 13:39:02.218647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.017 ms 00:26:13.816 [2024-11-26 13:39:02.218655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.230827] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:13.816 [2024-11-26 13:39:02.230863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:13.816 [2024-11-26 13:39:02.230874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.230882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:13.816 [2024-11-26 13:39:02.230890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.126 ms 00:26:13.816 [2024-11-26 13:39:02.230898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.255090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.255129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:13.816 [2024-11-26 13:39:02.255140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:26:13.816 [2024-11-26 13:39:02.255147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.266402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.266438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:13.816 [2024-11-26 13:39:02.266455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.213 ms 00:26:13.816 [2024-11-26 13:39:02.266462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.277812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.277846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:13.816 [2024-11-26 13:39:02.277856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.319 ms 00:26:13.816 [2024-11-26 13:39:02.277863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.278482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.278517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:13.816 [2024-11-26 13:39:02.278529] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:26:13.816 [2024-11-26 13:39:02.278536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.333549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.333608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:13.816 [2024-11-26 13:39:02.333625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.994 ms 00:26:13.816 [2024-11-26 13:39:02.333633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.344036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:13.816 [2024-11-26 13:39:02.346646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.346677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:13.816 [2024-11-26 13:39:02.346690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.961 ms 00:26:13.816 [2024-11-26 13:39:02.346698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.346795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.346806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:13.816 [2024-11-26 13:39:02.346815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:13.816 [2024-11-26 13:39:02.346824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.348291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.348326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:13.816 [2024-11-26 13:39:02.348335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:26:13.816 [2024-11-26 13:39:02.348343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.348367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.348376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:13.816 [2024-11-26 13:39:02.348385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.816 [2024-11-26 13:39:02.348392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.348427] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:13.816 [2024-11-26 13:39:02.348437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.348455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:13.816 [2024-11-26 13:39:02.348463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:13.816 [2024-11-26 13:39:02.348470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.371738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.371915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:13.816 [2024-11-26 13:39:02.371933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.250 ms 00:26:13.816 [2024-11-26 13:39:02.371947] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.372023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.816 [2024-11-26 13:39:02.372033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:13.816 [2024-11-26 13:39:02.372042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:13.816 [2024-11-26 13:39:02.372049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.816 [2024-11-26 13:39:02.372960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.209 ms, result 0 00:26:15.193  [2024-11-26T13:39:04.706Z] Copying: 1016/1048576 [kB] (1016 kBps) [2024-11-26T13:39:05.644Z] Copying: 5496/1048576 [kB] (4480 kBps) [2024-11-26T13:39:06.588Z] Copying: 48/1024 [MB] (43 MBps) [2024-11-26T13:39:07.986Z] Copying: 87/1024 [MB] (38 MBps) [2024-11-26T13:39:08.588Z] Copying: 120/1024 [MB] (33 MBps) [2024-11-26T13:39:09.976Z] Copying: 145/1024 [MB] (25 MBps) [2024-11-26T13:39:10.920Z] Copying: 173/1024 [MB] (27 MBps) [2024-11-26T13:39:11.892Z] Copying: 198/1024 [MB] (25 MBps) [2024-11-26T13:39:12.837Z] Copying: 226/1024 [MB] (27 MBps) [2024-11-26T13:39:13.778Z] Copying: 257/1024 [MB] (30 MBps) [2024-11-26T13:39:14.720Z] Copying: 298/1024 [MB] (41 MBps) [2024-11-26T13:39:15.661Z] Copying: 338/1024 [MB] (39 MBps) [2024-11-26T13:39:16.606Z] Copying: 385/1024 [MB] (47 MBps) [2024-11-26T13:39:17.993Z] Copying: 424/1024 [MB] (39 MBps) [2024-11-26T13:39:18.566Z] Copying: 473/1024 [MB] (48 MBps) [2024-11-26T13:39:19.958Z] Copying: 521/1024 [MB] (47 MBps) [2024-11-26T13:39:20.902Z] Copying: 569/1024 [MB] (48 MBps) [2024-11-26T13:39:21.844Z] Copying: 618/1024 [MB] (48 MBps) [2024-11-26T13:39:22.785Z] Copying: 668/1024 [MB] (49 MBps) [2024-11-26T13:39:23.725Z] Copying: 713/1024 [MB] (45 MBps) [2024-11-26T13:39:24.669Z] Copying: 748/1024 [MB] (34 MBps) [2024-11-26T13:39:25.613Z] Copying: 797/1024 [MB] (48 MBps) [2024-11-26T13:39:26.988Z] Copying: 846/1024 [MB] (49 MBps) [2024-11-26T13:39:27.923Z] Copying: 897/1024 [MB] (50 MBps) [2024-11-26T13:39:28.866Z] Copying: 948/1024 [MB] (51 MBps) [2024-11-26T13:39:29.127Z] Copying: 1003/1024 [MB] (55 MBps) [2024-11-26T13:39:30.512Z] Copying: 1024/1024 [MB] (average 38 MBps)[2024-11-26 13:39:30.256318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.256389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:41.942 [2024-11-26 13:39:30.256408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:41.942 [2024-11-26 13:39:30.256419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.256469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:41.942 [2024-11-26 13:39:30.260070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.260116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:41.942 [2024-11-26 13:39:30.260130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.580 ms 00:26:41.942 [2024-11-26 13:39:30.260141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.260458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.260476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:26:41.942 [2024-11-26 13:39:30.260487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:41.942 [2024-11-26 13:39:30.260498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.272028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.272068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:41.942 [2024-11-26 13:39:30.272079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.510 ms 00:26:41.942 [2024-11-26 13:39:30.272087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.278398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.278432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:41.942 [2024-11-26 13:39:30.278459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.295 ms 00:26:41.942 [2024-11-26 13:39:30.278467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.302649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.302690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:41.942 [2024-11-26 13:39:30.302701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.131 ms 00:26:41.942 [2024-11-26 13:39:30.302709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.316458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.316496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:41.942 [2024-11-26 13:39:30.316508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.727 ms 00:26:41.942 [2024-11-26 13:39:30.316516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.318244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.318276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:41.942 [2024-11-26 13:39:30.318286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:26:41.942 [2024-11-26 13:39:30.318293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.341836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.341872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:41.942 [2024-11-26 13:39:30.341882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.523 ms 00:26:41.942 [2024-11-26 13:39:30.341890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.364832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.364870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:41.942 [2024-11-26 13:39:30.364880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.922 ms 00:26:41.942 [2024-11-26 13:39:30.364888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.387183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 
13:39:30.387217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:41.942 [2024-11-26 13:39:30.387227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.275 ms 00:26:41.942 [2024-11-26 13:39:30.387235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.410074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.942 [2024-11-26 13:39:30.410111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:41.942 [2024-11-26 13:39:30.410121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.798 ms 00:26:41.942 [2024-11-26 13:39:30.410129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.942 [2024-11-26 13:39:30.410147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:41.942 [2024-11-26 13:39:30.410160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:41.942 [2024-11-26 13:39:30.410170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:41.942 [2024-11-26 13:39:30.410178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
...
00:26:41.944 [2024-11-26 13:39:30.410879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:26:41.944 [2024-11-26 13:39:30.410947] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:41.944 [2024-11-26 13:39:30.410954] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e55ade6-60d3-44a5-9469-fbf187dae141
00:26:41.944 [2024-11-26 13:39:30.410962] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:26:41.944 [2024-11-26 13:39:30.410969] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360
00:26:41.944 [2024-11-26 13:39:30.410986] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133376
00:26:41.944 [2024-11-26 13:39:30.410994] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149
00:26:41.944 [2024-11-26 13:39:30.411001] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:41.944 [2024-11-26 13:39:30.411015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:41.944 [2024-11-26 13:39:30.411022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:41.944 [2024-11-26 13:39:30.411029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:41.944 [2024-11-26 13:39:30.411035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:41.944 [2024-11-26 13:39:30.411042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:41.944 [2024-11-26 13:39:30.411051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:26:41.944 [2024-11-26 13:39:30.411059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms
00:26:41.944 [2024-11-26 13:39:30.411065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.423385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:41.944 [2024-11-26 13:39:30.423423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:26:41.944 [2024-11-26 13:39:30.423433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.304 ms
00:26:41.944 [2024-11-26 13:39:30.423451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.423815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:41.944 [2024-11-26 13:39:30.423832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:26:41.944 [2024-11-26 13:39:30.423840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms
00:26:41.944 [2024-11-26 13:39:30.423848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.456293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:41.944 [2024-11-26 13:39:30.456333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:26:41.944 [2024-11-26 13:39:30.456343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:41.944 [2024-11-26 13:39:30.456351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.456404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:41.944 [2024-11-26 13:39:30.456413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:26:41.944 [2024-11-26 13:39:30.456421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:41.944 [2024-11-26 13:39:30.456428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.456508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:41.944 [2024-11-26 13:39:30.456519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:41.944 [2024-11-26 13:39:30.456526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:41.944 [2024-11-26 13:39:30.456533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:41.944 [2024-11-26 13:39:30.456549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:41.944 [2024-11-26 13:39:30.456556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:41.944 [2024-11-26 13:39:30.456564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:41.944 [2024-11-26 13:39:30.456571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.534363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.534414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:26:42.203 [2024-11-26 13:39:30.534425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.534432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:26:42.203 [2024-11-26 13:39:30.598412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:26:42.203 [2024-11-26 13:39:30.598505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:26:42.203 [2024-11-26 13:39:30.598574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:26:42.203 [2024-11-26 13:39:30.598682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:26:42.203 [2024-11-26 13:39:30.598733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:42.203 [2024-11-26 13:39:30.598791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:42.203 [2024-11-26 13:39:30.598843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:42.203 [2024-11-26 13:39:30.598850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:42.203 [2024-11-26 13:39:30.598857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:42.203 [2024-11-26 13:39:30.598963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.625 ms, result 0
00:26:42.771
00:26:42.771
00:26:42.771 13:39:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:45.361 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:26:45.361 13:39:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:45.361 [2024-11-26 13:39:33.435045] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
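Two consistency checks fall out of the shutdown statistics above. First, the two written bands account exactly for the valid-LBA total: 261120 (Band 1, closed) + 1536 (Band 2, open) = 262656. Second, the reported WAF is simply total writes divided by user writes; a minimal shell sketch reproducing the logged value from those two counters:

  # WAF = total writes / user writes, using the counters from the ftl_dev_dump_stats block
  awk -v total=135360 -v user=133376 'BEGIN { printf "WAF: %.4f\n", total / user }'   # prints: WAF: 1.0149

With the device shut down cleanly, dirty_shutdown.sh@94 md5-verifies testfile, and @95 then uses spdk_dd to read the second 262144-block extent of ftl0 (--skip=262144) into testfile2 for comparison.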
00:26:45.361 [2024-11-26 13:39:33.435322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81114 ]
00:26:45.361 [2024-11-26 13:39:33.594812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.361 [2024-11-26 13:39:33.696738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.620 [2024-11-26 13:39:33.950782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:45.620 [2024-11-26 13:39:33.950848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:45.620 [2024-11-26 13:39:34.104362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:45.620 [2024-11-26 13:39:34.104425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:26:45.620 [2024-11-26 13:39:34.104439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:45.620 [2024-11-26 13:39:34.104458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:45.620 [2024-11-26 13:39:34.104502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:45.620 [2024-11-26 13:39:34.104514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:45.620 [2024-11-26 13:39:34.104522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:26:45.620 [2024-11-26 13:39:34.104530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:45.620 [2024-11-26 13:39:34.104549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:45.620 [2024-11-26 13:39:34.105203] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:45.620 [2024-11-26 13:39:34.105223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:45.620 [2024-11-26 13:39:34.105231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:26:45.620 [2024-11-26 13:39:34.105240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms
00:26:45.620 [2024-11-26 13:39:34.105247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:45.620 [2024-11-26 13:39:34.106469] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:45.620 [2024-11-26 13:39:34.118693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:45.620 [2024-11-26 13:39:34.118729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:26:45.620 [2024-11-26 13:39:34.118741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.225 ms
00:26:45.620 [2024-11-26 13:39:34.118749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:45.620 [2024-11-26 13:39:34.118805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:45.620 [2024-11-26 13:39:34.118814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:26:45.620 [2024-11-26 13:39:34.118822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:26:45.620 [2024-11-26 13:39:34.118830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:45.620 [2024-11-26 13:39:34.123917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
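Each management step above (and throughout this run) is traced from mngt/ftl_mngt.c as a group of four records: Action or Rollback, name, duration, and status. The two bdev_open_ext notices about nvc0n1 are emitted while spdk_dd is still constructing bdevs from ftl.json and appear to be benign open retries for a device that registers a moment later (its partition nvc0n1p0 is in use as the write buffer cache shortly afterwards). A hedged awk sketch for skimming such runs, assuming the log has been saved one record per line to a hypothetical build.log:

  # Pair each traced FTL management step with its duration.
  awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     n = $0 }
       /trace_step/ && /duration: / { sub(/.*duration: /, ""); print n " -> " $0 }' build.log
  # e.g.: Load super block -> 12.225 ms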
00:26:45.620 [2024-11-26 13:39:34.123950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.620 [2024-11-26 13:39:34.123960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:26:45.620 [2024-11-26 13:39:34.123971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.124044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.124053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.620 [2024-11-26 13:39:34.124062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:45.620 [2024-11-26 13:39:34.124070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.124108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.124117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:45.620 [2024-11-26 13:39:34.124125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:45.620 [2024-11-26 13:39:34.124132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.124156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:45.620 [2024-11-26 13:39:34.127376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.127405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.620 [2024-11-26 13:39:34.127416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:26:45.620 [2024-11-26 13:39:34.127424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.127464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.127473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:45.620 [2024-11-26 13:39:34.127481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:45.620 [2024-11-26 13:39:34.127488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.127506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:45.620 [2024-11-26 13:39:34.127541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:45.620 [2024-11-26 13:39:34.127575] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:45.620 [2024-11-26 13:39:34.127592] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:45.620 [2024-11-26 13:39:34.127692] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:45.620 [2024-11-26 13:39:34.127702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:45.620 [2024-11-26 13:39:34.127712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:45.620 [2024-11-26 13:39:34.127721] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:45.620 [2024-11-26 13:39:34.127730] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:45.620 [2024-11-26 13:39:34.127738] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:45.620 [2024-11-26 13:39:34.127746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:45.620 [2024-11-26 13:39:34.127753] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:45.620 [2024-11-26 13:39:34.127762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:45.620 [2024-11-26 13:39:34.127769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.127776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:45.620 [2024-11-26 13:39:34.127783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:26:45.620 [2024-11-26 13:39:34.127790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.620 [2024-11-26 13:39:34.127872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.620 [2024-11-26 13:39:34.127880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:45.620 [2024-11-26 13:39:34.127887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:45.620 [2024-11-26 13:39:34.127894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.621 [2024-11-26 13:39:34.127997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:45.621 [2024-11-26 13:39:34.128014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:45.621 [2024-11-26 13:39:34.128022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:45.621 [2024-11-26 13:39:34.128045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:45.621 [2024-11-26 13:39:34.128064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.621 [2024-11-26 13:39:34.128078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:45.621 [2024-11-26 13:39:34.128085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:45.621 [2024-11-26 13:39:34.128092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.621 [2024-11-26 13:39:34.128105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:45.621 [2024-11-26 13:39:34.128112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:45.621 [2024-11-26 13:39:34.128119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:45.621 [2024-11-26 13:39:34.128131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128138] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:45.621 [2024-11-26 13:39:34.128151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:45.621 [2024-11-26 13:39:34.128169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:45.621 [2024-11-26 13:39:34.128188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:45.621 [2024-11-26 13:39:34.128208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:45.621 [2024-11-26 13:39:34.128227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.621 [2024-11-26 13:39:34.128239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:45.621 [2024-11-26 13:39:34.128246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:45.621 [2024-11-26 13:39:34.128252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.621 [2024-11-26 13:39:34.128259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:45.621 [2024-11-26 13:39:34.128265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:45.621 [2024-11-26 13:39:34.128271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:45.621 [2024-11-26 13:39:34.128286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:45.621 [2024-11-26 13:39:34.128292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128299] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:45.621 [2024-11-26 13:39:34.128306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:45.621 [2024-11-26 13:39:34.128314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.621 [2024-11-26 13:39:34.128329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:45.621 [2024-11-26 13:39:34.128336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:45.621 [2024-11-26 13:39:34.128342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:45.621 
[2024-11-26 13:39:34.128349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:45.621 [2024-11-26 13:39:34.128355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:45.621 [2024-11-26 13:39:34.128361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:45.621 [2024-11-26 13:39:34.128369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:45.621 [2024-11-26 13:39:34.128378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:45.621 [2024-11-26 13:39:34.128395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:45.621 [2024-11-26 13:39:34.128402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:45.621 [2024-11-26 13:39:34.128409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:45.621 [2024-11-26 13:39:34.128415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:45.621 [2024-11-26 13:39:34.128422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:45.621 [2024-11-26 13:39:34.128429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:45.621 [2024-11-26 13:39:34.128435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:45.621 [2024-11-26 13:39:34.128461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:45.621 [2024-11-26 13:39:34.128468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:45.621 [2024-11-26 13:39:34.128504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:45.621 [2024-11-26 13:39:34.128512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:45.621 [2024-11-26 13:39:34.128527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:45.621 [2024-11-26 13:39:34.128534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:45.621 [2024-11-26 13:39:34.128542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:45.621 [2024-11-26 13:39:34.128549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.621 [2024-11-26 13:39:34.128556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:45.621 [2024-11-26 13:39:34.128563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:26:45.621 [2024-11-26 13:39:34.128571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.621 [2024-11-26 13:39:34.154580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.621 [2024-11-26 13:39:34.154618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.621 [2024-11-26 13:39:34.154628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.953 ms 00:26:45.621 [2024-11-26 13:39:34.154639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.621 [2024-11-26 13:39:34.154726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.621 [2024-11-26 13:39:34.154735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:45.621 [2024-11-26 13:39:34.154742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:45.621 [2024-11-26 13:39:34.154750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.879 [2024-11-26 13:39:34.208040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.879 [2024-11-26 13:39:34.208099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.879 [2024-11-26 13:39:34.208111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.232 ms 00:26:45.879 [2024-11-26 13:39:34.208119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.879 [2024-11-26 13:39:34.208171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.879 [2024-11-26 13:39:34.208180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.879 [2024-11-26 13:39:34.208192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.879 [2024-11-26 13:39:34.208199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.879 [2024-11-26 13:39:34.208592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.879 [2024-11-26 13:39:34.208617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.879 [2024-11-26 13:39:34.208627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:26:45.879 [2024-11-26 13:39:34.208634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.879 [2024-11-26 13:39:34.208767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.879 [2024-11-26 13:39:34.208781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.879 [2024-11-26 13:39:34.208794] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:26:45.879 [2024-11-26 13:39:34.208802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.879 [2024-11-26 13:39:34.221779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.879 [2024-11-26 13:39:34.221813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.879 [2024-11-26 13:39:34.221825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.959 ms 00:26:45.879 [2024-11-26 13:39:34.221832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.234142] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:45.880 [2024-11-26 13:39:34.234178] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:45.880 [2024-11-26 13:39:34.234188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.234196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:45.880 [2024-11-26 13:39:34.234204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.262 ms 00:26:45.880 [2024-11-26 13:39:34.234211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.258123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.258160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:45.880 [2024-11-26 13:39:34.258171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.874 ms 00:26:45.880 [2024-11-26 13:39:34.258179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.269459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.269492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:45.880 [2024-11-26 13:39:34.269502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.237 ms 00:26:45.880 [2024-11-26 13:39:34.269509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.280665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.280697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:45.880 [2024-11-26 13:39:34.280707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.126 ms 00:26:45.880 [2024-11-26 13:39:34.280714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.281308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.281333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.880 [2024-11-26 13:39:34.281345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:26:45.880 [2024-11-26 13:39:34.281353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.335925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.335979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:45.880 [2024-11-26 13:39:34.335997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.554 ms 00:26:45.880 [2024-11-26 13:39:34.336005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.346291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.880 [2024-11-26 13:39:34.348731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.348761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.880 [2024-11-26 13:39:34.348773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:26:45.880 [2024-11-26 13:39:34.348781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.348877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.348888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:45.880 [2024-11-26 13:39:34.348896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:45.880 [2024-11-26 13:39:34.348906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.349473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.349504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.880 [2024-11-26 13:39:34.349513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:26:45.880 [2024-11-26 13:39:34.349520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.349543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.349551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:45.880 [2024-11-26 13:39:34.349559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.880 [2024-11-26 13:39:34.349566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.349600] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:45.880 [2024-11-26 13:39:34.349610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.349617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:45.880 [2024-11-26 13:39:34.349625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:45.880 [2024-11-26 13:39:34.349632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.372568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.372603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.880 [2024-11-26 13:39:34.372614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.919 ms 00:26:45.880 [2024-11-26 13:39:34.372626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.880 [2024-11-26 13:39:34.372695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.880 [2024-11-26 13:39:34.372705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.880 [2024-11-26 13:39:34.372713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:45.880 [2024-11-26 13:39:34.372720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
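Before the startup summary below, the layout dump earlier in this run can be cross-checked arithmetically, assuming the FTL's 4096-byte block size (the hex block counts in the superblock dump are consistent with it): 20971520 L2P entries at an address size of 4 bytes come to exactly the 80.00 MiB of the l2p region; each p2l region holds 2048 checkpoint pages, i.e. 8.00 MiB; and the superblock entry with blk_offs:0x20 blk_sz:0x5000 matches the l2p region's 0.12 MiB offset and 80 MiB size. In shell arithmetic:

  # Cross-check FTL layout sizes from the dump (4096-byte blocks assumed)
  echo $(( 20971520 * 4 / 1048576 ))    # l2p region: entries * 4 B       -> 80 (MiB)
  echo $(( 2048 * 4096 / 1048576 ))     # one p2l region: pages * 4 KiB   -> 8 (MiB)
  echo $(( 0x5000 * 4096 / 1048576 ))   # superblock blk_sz 0x5000        -> 80 (MiB)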
00:26:45.880 [2024-11-26 13:39:34.373965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.195 ms, result 0 00:26:47.252  [2024-11-26T13:39:36.755Z] Copying: 50/1024 [MB] (50 MBps) [2024-11-26T13:39:37.687Z] Copying: 100/1024 [MB] (49 MBps) [2024-11-26T13:39:38.620Z] Copying: 148/1024 [MB] (47 MBps) [2024-11-26T13:39:39.551Z] Copying: 195/1024 [MB] (46 MBps) [2024-11-26T13:39:40.921Z] Copying: 244/1024 [MB] (49 MBps) [2024-11-26T13:39:41.857Z] Copying: 283/1024 [MB] (38 MBps) [2024-11-26T13:39:42.789Z] Copying: 329/1024 [MB] (46 MBps) [2024-11-26T13:39:43.721Z] Copying: 378/1024 [MB] (49 MBps) [2024-11-26T13:39:44.653Z] Copying: 426/1024 [MB] (47 MBps) [2024-11-26T13:39:45.585Z] Copying: 473/1024 [MB] (47 MBps) [2024-11-26T13:39:46.960Z] Copying: 519/1024 [MB] (45 MBps) [2024-11-26T13:39:47.893Z] Copying: 571/1024 [MB] (51 MBps) [2024-11-26T13:39:48.826Z] Copying: 620/1024 [MB] (49 MBps) [2024-11-26T13:39:49.766Z] Copying: 668/1024 [MB] (47 MBps) [2024-11-26T13:39:50.700Z] Copying: 718/1024 [MB] (50 MBps) [2024-11-26T13:39:51.633Z] Copying: 769/1024 [MB] (51 MBps) [2024-11-26T13:39:52.566Z] Copying: 815/1024 [MB] (45 MBps) [2024-11-26T13:39:53.940Z] Copying: 853/1024 [MB] (38 MBps) [2024-11-26T13:39:54.876Z] Copying: 897/1024 [MB] (43 MBps) [2024-11-26T13:39:55.811Z] Copying: 925/1024 [MB] (27 MBps) [2024-11-26T13:39:56.746Z] Copying: 952/1024 [MB] (27 MBps) [2024-11-26T13:39:57.314Z] Copying: 996/1024 [MB] (43 MBps) [2024-11-26T13:39:57.314Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-11-26 13:39:57.170561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.170622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.744 [2024-11-26 13:39:57.170636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:08.744 [2024-11-26 13:39:57.170645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.170667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.744 [2024-11-26 13:39:57.173264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.173291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.744 [2024-11-26 13:39:57.173307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:27:08.744 [2024-11-26 13:39:57.173316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.173539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.173555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.744 [2024-11-26 13:39:57.173564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:27:08.744 [2024-11-26 13:39:57.173571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.176990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.177006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.744 [2024-11-26 13:39:57.177015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.405 ms 00:27:08.744 [2024-11-26 13:39:57.177026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.184177] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.184203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.744 [2024-11-26 13:39:57.184220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.135 ms 00:27:08.744 [2024-11-26 13:39:57.184227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.212273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.212310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.744 [2024-11-26 13:39:57.212322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.180 ms 00:27:08.744 [2024-11-26 13:39:57.212330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.226892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.226935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:08.744 [2024-11-26 13:39:57.226947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.536 ms 00:27:08.744 [2024-11-26 13:39:57.226955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.228186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.228213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:08.744 [2024-11-26 13:39:57.228222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:27:08.744 [2024-11-26 13:39:57.228230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.251394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.251422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:08.744 [2024-11-26 13:39:57.251432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.150 ms 00:27:08.744 [2024-11-26 13:39:57.251447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.273870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.273901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:08.744 [2024-11-26 13:39:57.273912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.404 ms 00:27:08.744 [2024-11-26 13:39:57.273919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.744 [2024-11-26 13:39:57.296128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.744 [2024-11-26 13:39:57.296170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:08.744 [2024-11-26 13:39:57.296180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.189 ms 00:27:08.744 [2024-11-26 13:39:57.296187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.005 [2024-11-26 13:39:57.318405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.005 [2024-11-26 13:39:57.318437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.005 [2024-11-26 13:39:57.318455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.174 ms 00:27:09.005 [2024-11-26 13:39:57.318462] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:09.005 [2024-11-26 13:39:57.318482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.005 [2024-11-26 13:39:57.318500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:09.005 [2024-11-26 13:39:57.318510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:09.005 [2024-11-26 13:39:57.318519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.005 [2024-11-26 13:39:57.318579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 
261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.318988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319036] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:09.006 [2024-11-26 13:39:57.319102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 
13:39:57.319218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:09.007 [2024-11-26 13:39:57.319242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.007 [2024-11-26 13:39:57.319253] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3e55ade6-60d3-44a5-9469-fbf187dae141 00:27:09.007 [2024-11-26 13:39:57.319260] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:09.007 [2024-11-26 13:39:57.319267] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:09.007 [2024-11-26 13:39:57.319274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:09.007 [2024-11-26 13:39:57.319281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:09.007 [2024-11-26 13:39:57.319293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.007 [2024-11-26 13:39:57.319300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.007 [2024-11-26 13:39:57.319307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.007 [2024-11-26 13:39:57.319313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.007 [2024-11-26 13:39:57.319319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.007 [2024-11-26 13:39:57.319326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.007 [2024-11-26 13:39:57.319335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.007 [2024-11-26 13:39:57.319344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:27:09.007 [2024-11-26 13:39:57.319350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.331314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.007 [2024-11-26 13:39:57.331341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.007 [2024-11-26 13:39:57.331351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.946 ms 00:27:09.007 [2024-11-26 13:39:57.331358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.331718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.007 [2024-11-26 13:39:57.331738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.007 [2024-11-26 13:39:57.331750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:27:09.007 [2024-11-26 13:39:57.331757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.364192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.364231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.007 [2024-11-26 13:39:57.364242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.364249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.364307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.364319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
bands metadata 00:27:09.007 [2024-11-26 13:39:57.364326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.364333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.364391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.364404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.007 [2024-11-26 13:39:57.364413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.364420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.364435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.364454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.007 [2024-11-26 13:39:57.364465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.364472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.440336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.440374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.007 [2024-11-26 13:39:57.440385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.440392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.502756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.502795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.007 [2024-11-26 13:39:57.502810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.502817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.502886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.502896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.007 [2024-11-26 13:39:57.502904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.502911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.502943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.502952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.007 [2024-11-26 13:39:57.502959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.502968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.503049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 13:39:57.503058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.007 [2024-11-26 13:39:57.503065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.007 [2024-11-26 13:39:57.503072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.007 [2024-11-26 13:39:57.503098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.007 [2024-11-26 
13:39:57.503106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:09.007 [2024-11-26 13:39:57.503113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:09.008 [2024-11-26 13:39:57.503120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.008 [2024-11-26 13:39:57.503155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.008 [2024-11-26 13:39:57.503164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:09.008 [2024-11-26 13:39:57.503171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:09.008 [2024-11-26 13:39:57.503178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.008 [2024-11-26 13:39:57.503215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.008 [2024-11-26 13:39:57.503224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:09.008 [2024-11-26 13:39:57.503231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:09.008 [2024-11-26 13:39:57.503238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.008 [2024-11-26 13:39:57.503344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.759 ms, result 0
00:27:09.942
00:27:09.942
00:27:09.942 13:39:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:27:11.841 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:27:11.841 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:27:11.841 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:27:11.841 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:11.841 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79760
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79760 ']'
00:27:12.097 13:40:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79760
00:27:12.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79760) - No such process
00:27:12.098 Process with pid 79760 is not found
00:27:12.098 13:40:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79760 is not found'
00:27:12.098 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:27:12.366 Remove shared memory files
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:27:12.366
00:27:12.366 real 2m32.245s
00:27:12.366 user 2m51.160s
00:27:12.366 sys 0m23.872s
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:12.366 13:40:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:12.366 ************************************
00:27:12.366 END TEST ftl_dirty_shutdown
00:27:12.366 ************************************
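For reference, the md5sum check that just passed rests on the pattern sketched below: a checksum recorded before the dirty shutdown is re-verified after the FTL device recovers. The paths are the ones the trace uses; the dd geometry and seeding from /dev/urandom are illustrative assumptions, not the exact parameters of dirty_shutdown.sh.

# Sketch of the write-then-verify pattern behind the md5sum line above
# (assumed data geometry; the real test drives its I/O through the FTL bdev).
TESTDIR=/home/vagrant/spdk_repo/spdk/test/ftl
dd if=/dev/urandom of="$TESTDIR/testfile2" bs=4K count=256     # data written before the dirty shutdown
md5sum "$TESTDIR/testfile2" > "$TESTDIR/testfile2.md5"         # golden checksum kept outside the device
# ... dirty shutdown and FTL recovery happen between these two steps ...
md5sum -c "$TESTDIR/testfile2.md5"                             # passes only if recovery restored every block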
00:27:12.366 13:40:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:27:12.366 13:40:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:12.366 13:40:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:12.366 13:40:00 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:12.366 ************************************
00:27:12.366 START TEST ftl_upgrade_shutdown
00:27:12.366 ************************************
00:27:12.366 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:27:12.366 * Looking for test storage...
00:27:12.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:27:12.366 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:12.366 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:27:12.366 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:12.627 --rc genhtml_branch_coverage=1
00:27:12.627 --rc genhtml_function_coverage=1
00:27:12.627 --rc genhtml_legend=1
00:27:12.627 --rc geninfo_all_blocks=1
00:27:12.627 --rc geninfo_unexecuted_blocks=1
00:27:12.627
00:27:12.627 '
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:12.627 --rc genhtml_branch_coverage=1
00:27:12.627 --rc genhtml_function_coverage=1
00:27:12.627 --rc genhtml_legend=1
00:27:12.627 --rc geninfo_all_blocks=1
00:27:12.627 --rc geninfo_unexecuted_blocks=1
00:27:12.627
00:27:12.627 '
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:12.627 --rc genhtml_branch_coverage=1
00:27:12.627 --rc genhtml_function_coverage=1
00:27:12.627 --rc genhtml_legend=1
00:27:12.627 --rc geninfo_all_blocks=1
00:27:12.627 --rc geninfo_unexecuted_blocks=1
00:27:12.627
00:27:12.627 '
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:12.627 --rc genhtml_branch_coverage=1
00:27:12.627 --rc genhtml_function_coverage=1
00:27:12.627 --rc genhtml_legend=1
00:27:12.627 --rc geninfo_all_blocks=1
00:27:12.627 --rc geninfo_unexecuted_blocks=1
00:27:12.627
00:27:12.627 '
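The scripts/common.sh steps traced above are the version comparison that gates the lcov options: split both version strings on ".", "-", or ":", then compare field by field. A condensed, self-contained re-implementation of the same idea is sketched below; it is a simplified stand-in, not the exact source of cmp_versions.

# Simplified sketch of the "lt 1.15 2" comparison traced above.
lt() {
    # Return 0 (true) when $1 is strictly older than $2.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15), mirroring the read -ra ver1 step
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # reaches the same result as the trace (return 0)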
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:12.627 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81470
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81470
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81470 ']'
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:12.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:12.628 13:40:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:12.628 [2024-11-26 13:40:01.070309] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
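The waitforlisten call traced above boils down to the pattern sketched here: start the target pinned to the requested core, then poll the RPC socket until it answers. This is a simplified stand-in for the helper in test/common/autotest_common.sh, not a copy of it; the paths, socket, and retry budget match what the trace shows.

# Minimal sketch of the target bring-up performed above.
spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock

"$spdk_tgt_bin" --cpumask '[0]' &   # pin the target to core 0, as in the trace
spdk_tgt_pid=$!

max_retries=100
until "$rpc_py" -s "$rpc_addr" rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" || { echo "target died before listening" >&2; exit 1; }
    (( --max_retries > 0 )) || { echo "timed out waiting for $rpc_addr" >&2; exit 1; }
    sleep 0.1   # poll instead of sleeping a fixed, arbitrary time
done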
00:27:12.628 [2024-11-26 13:40:01.070554] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81470 ] 00:27:12.884 [2024-11-26 13:40:01.230964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.884 [2024-11-26 13:40:01.329908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:13.447 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:13.448 13:40:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:13.704 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:13.961 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:13.961 { 00:27:13.961 "name": "basen1", 00:27:13.961 "aliases": [ 00:27:13.961 "9c5246a8-694e-4a0f-ba9e-177e6f510531" 00:27:13.961 ], 00:27:13.961 "product_name": "NVMe disk", 00:27:13.961 "block_size": 4096, 00:27:13.961 "num_blocks": 1310720, 00:27:13.961 "uuid": "9c5246a8-694e-4a0f-ba9e-177e6f510531", 00:27:13.961 "numa_id": -1, 00:27:13.961 "assigned_rate_limits": { 00:27:13.961 "rw_ios_per_sec": 0, 00:27:13.961 "rw_mbytes_per_sec": 0, 00:27:13.961 "r_mbytes_per_sec": 0, 00:27:13.961 "w_mbytes_per_sec": 0 00:27:13.961 }, 00:27:13.961 "claimed": true, 00:27:13.961 "claim_type": "read_many_write_one", 00:27:13.961 "zoned": false, 00:27:13.961 "supported_io_types": { 00:27:13.961 "read": true, 00:27:13.961 "write": true, 00:27:13.961 "unmap": true, 00:27:13.961 "flush": true, 00:27:13.961 "reset": true, 00:27:13.961 "nvme_admin": true, 00:27:13.961 "nvme_io": true, 00:27:13.961 "nvme_io_md": false, 00:27:13.961 "write_zeroes": true, 00:27:13.961 "zcopy": false, 00:27:13.961 "get_zone_info": false, 00:27:13.961 "zone_management": false, 00:27:13.962 "zone_append": false, 00:27:13.962 "compare": true, 00:27:13.962 "compare_and_write": false, 00:27:13.962 "abort": true, 00:27:13.962 "seek_hole": false, 00:27:13.962 "seek_data": false, 00:27:13.962 "copy": true, 00:27:13.962 "nvme_iov_md": false 00:27:13.962 }, 00:27:13.962 "driver_specific": { 00:27:13.962 "nvme": [ 00:27:13.962 { 00:27:13.962 "pci_address": "0000:00:11.0", 00:27:13.962 "trid": { 00:27:13.962 "trtype": "PCIe", 00:27:13.962 "traddr": "0000:00:11.0" 00:27:13.962 }, 00:27:13.962 "ctrlr_data": { 00:27:13.962 "cntlid": 0, 00:27:13.962 "vendor_id": "0x1b36", 00:27:13.962 "model_number": "QEMU NVMe Ctrl", 00:27:13.962 "serial_number": "12341", 00:27:13.962 "firmware_revision": "8.0.0", 00:27:13.962 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:13.962 "oacs": { 00:27:13.962 "security": 0, 00:27:13.962 "format": 1, 00:27:13.962 "firmware": 0, 00:27:13.962 "ns_manage": 1 00:27:13.962 }, 00:27:13.962 "multi_ctrlr": false, 00:27:13.962 "ana_reporting": false 00:27:13.962 }, 00:27:13.962 "vs": { 00:27:13.962 "nvme_version": "1.4" 00:27:13.962 }, 00:27:13.962 "ns_data": { 00:27:13.962 "id": 1, 00:27:13.962 "can_share": false 00:27:13.962 } 00:27:13.962 } 00:27:13.962 ], 00:27:13.962 "mp_policy": "active_passive" 00:27:13.962 } 00:27:13.962 } 00:27:13.962 ]' 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:13.962 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:14.219 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0f31e923-2d4c-426a-9d7b-44b13680a22d 00:27:14.219 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:14.219 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f31e923-2d4c-426a-9d7b-44b13680a22d 00:27:14.477 13:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:14.734 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b9b8c808-d6f4-46f7-950f-9f765f80950c 00:27:14.734 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b9b8c808-d6f4-46f7-950f-9f765f80950c 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f0da180f-ab39-46c7-88fe-41e4d1884954 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f0da180f-ab39-46c7-88fe-41e4d1884954 ]] 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f0da180f-ab39-46c7-88fe-41e4d1884954 5120 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f0da180f-ab39-46c7-88fe-41e4d1884954 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f0da180f-ab39-46c7-88fe-41e4d1884954 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f0da180f-ab39-46c7-88fe-41e4d1884954 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0da180f-ab39-46c7-88fe-41e4d1884954 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:14.992 { 00:27:14.992 "name": "f0da180f-ab39-46c7-88fe-41e4d1884954", 00:27:14.992 "aliases": [ 00:27:14.992 "lvs/basen1p0" 00:27:14.992 ], 00:27:14.992 "product_name": "Logical Volume", 00:27:14.992 "block_size": 4096, 00:27:14.992 "num_blocks": 5242880, 00:27:14.992 "uuid": "f0da180f-ab39-46c7-88fe-41e4d1884954", 00:27:14.992 "assigned_rate_limits": { 00:27:14.992 "rw_ios_per_sec": 0, 00:27:14.992 "rw_mbytes_per_sec": 0, 00:27:14.992 "r_mbytes_per_sec": 0, 00:27:14.992 "w_mbytes_per_sec": 0 00:27:14.992 }, 00:27:14.992 "claimed": false, 00:27:14.992 "zoned": false, 00:27:14.992 "supported_io_types": { 00:27:14.992 "read": true, 00:27:14.992 "write": true, 00:27:14.992 "unmap": true, 00:27:14.992 "flush": false, 00:27:14.992 "reset": true, 00:27:14.992 "nvme_admin": false, 00:27:14.992 "nvme_io": false, 00:27:14.992 "nvme_io_md": false, 00:27:14.992 "write_zeroes": 
true, 00:27:14.992 "zcopy": false, 00:27:14.992 "get_zone_info": false, 00:27:14.992 "zone_management": false, 00:27:14.992 "zone_append": false, 00:27:14.992 "compare": false, 00:27:14.992 "compare_and_write": false, 00:27:14.992 "abort": false, 00:27:14.992 "seek_hole": true, 00:27:14.992 "seek_data": true, 00:27:14.992 "copy": false, 00:27:14.992 "nvme_iov_md": false 00:27:14.992 }, 00:27:14.992 "driver_specific": { 00:27:14.992 "lvol": { 00:27:14.992 "lvol_store_uuid": "b9b8c808-d6f4-46f7-950f-9f765f80950c", 00:27:14.992 "base_bdev": "basen1", 00:27:14.992 "thin_provision": true, 00:27:14.992 "num_allocated_clusters": 0, 00:27:14.992 "snapshot": false, 00:27:14.992 "clone": false, 00:27:14.992 "esnap_clone": false 00:27:14.992 } 00:27:14.992 } 00:27:14.992 } 00:27:14.992 ]' 00:27:14.992 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:15.250 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:15.508 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:15.508 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:15.508 13:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:15.767 13:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:15.767 13:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:15.767 13:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f0da180f-ab39-46c7-88fe-41e4d1884954 -c cachen1p0 --l2p_dram_limit 2 00:27:15.767 [2024-11-26 13:40:04.268696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.268869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:15.767 [2024-11-26 13:40:04.268890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:15.767 [2024-11-26 13:40:04.268898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.268952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.268960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:15.767 [2024-11-26 13:40:04.268969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:15.767 [2024-11-26 13:40:04.268975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.268992] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:15.767 [2024-11-26 
13:40:04.269621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:15.767 [2024-11-26 13:40:04.269643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.269649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:15.767 [2024-11-26 13:40:04.269658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:27:15.767 [2024-11-26 13:40:04.269664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.269693] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f1485b81-0e2e-45c5-a325-46ffaa20f8e4 00:27:15.767 [2024-11-26 13:40:04.270720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.270749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:15.767 [2024-11-26 13:40:04.270758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:15.767 [2024-11-26 13:40:04.270765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.276012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.276185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:15.767 [2024-11-26 13:40:04.276198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.190 ms 00:27:15.767 [2024-11-26 13:40:04.276206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.276240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.276249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:15.767 [2024-11-26 13:40:04.276255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:15.767 [2024-11-26 13:40:04.276264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.276304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.276314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:15.767 [2024-11-26 13:40:04.276322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:15.767 [2024-11-26 13:40:04.276332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.276348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:15.767 [2024-11-26 13:40:04.279316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.279417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:15.767 [2024-11-26 13:40:04.279432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.971 ms 00:27:15.767 [2024-11-26 13:40:04.279450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.279476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.767 [2024-11-26 13:40:04.279483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:15.767 [2024-11-26 13:40:04.279491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:15.767 [2024-11-26 13:40:04.279497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:15.767 [2024-11-26 13:40:04.279518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:15.767 [2024-11-26 13:40:04.279635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:15.768 [2024-11-26 13:40:04.279648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:15.768 [2024-11-26 13:40:04.279656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:15.768 [2024-11-26 13:40:04.279666] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:15.768 [2024-11-26 13:40:04.279673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:15.768 [2024-11-26 13:40:04.279681] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:15.768 [2024-11-26 13:40:04.279686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:15.768 [2024-11-26 13:40:04.279695] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:15.768 [2024-11-26 13:40:04.279701] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:15.768 [2024-11-26 13:40:04.279708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.768 [2024-11-26 13:40:04.279714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:15.768 [2024-11-26 13:40:04.279721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:27:15.768 [2024-11-26 13:40:04.279726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.768 [2024-11-26 13:40:04.279796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.768 [2024-11-26 13:40:04.279809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:15.768 [2024-11-26 13:40:04.279816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:15.768 [2024-11-26 13:40:04.279821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.768 [2024-11-26 13:40:04.279904] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:15.768 [2024-11-26 13:40:04.279912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:15.768 [2024-11-26 13:40:04.279919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.768 [2024-11-26 13:40:04.279925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.279932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:15.768 [2024-11-26 13:40:04.279938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.279945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:15.768 [2024-11-26 13:40:04.279950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:15.768 [2024-11-26 13:40:04.279956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:15.768 [2024-11-26 13:40:04.279961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.279967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:15.768 [2024-11-26 13:40:04.279972] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:15.768 [2024-11-26 13:40:04.279980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.279985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:15.768 [2024-11-26 13:40:04.279992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:15.768 [2024-11-26 13:40:04.279997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:15.768 [2024-11-26 13:40:04.280010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:15.768 [2024-11-26 13:40:04.280017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:15.768 [2024-11-26 13:40:04.280029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:15.768 [2024-11-26 13:40:04.280049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:15.768 [2024-11-26 13:40:04.280066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:15.768 [2024-11-26 13:40:04.280083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:15.768 [2024-11-26 13:40:04.280102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:15.768 [2024-11-26 13:40:04.280119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:15.768 [2024-11-26 13:40:04.280138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:15.768 [2024-11-26 13:40:04.280154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:15.768 [2024-11-26 13:40:04.280161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280166] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:15.768 [2024-11-26 13:40:04.280173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:15.768 [2024-11-26 13:40:04.280179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:15.768 [2024-11-26 13:40:04.280192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:15.768 [2024-11-26 13:40:04.280201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:15.768 [2024-11-26 13:40:04.280206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:15.768 [2024-11-26 13:40:04.280213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:15.768 [2024-11-26 13:40:04.280218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:15.768 [2024-11-26 13:40:04.280224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:15.768 [2024-11-26 13:40:04.280232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:15.768 [2024-11-26 13:40:04.280243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:15.768 [2024-11-26 13:40:04.280257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:15.768 [2024-11-26 13:40:04.280276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:15.768 [2024-11-26 13:40:04.280283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:15.768 [2024-11-26 13:40:04.280288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:15.768 [2024-11-26 13:40:04.280295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:15.768 [2024-11-26 13:40:04.280340] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:15.768 [2024-11-26 13:40:04.280347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.768 [2024-11-26 13:40:04.280360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:15.768 [2024-11-26 13:40:04.280365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:15.768 [2024-11-26 13:40:04.280372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:15.768 [2024-11-26 13:40:04.280379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.768 [2024-11-26 13:40:04.280385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:15.768 [2024-11-26 13:40:04.280391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:27:15.768 [2024-11-26 13:40:04.280398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.768 [2024-11-26 13:40:04.280438] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
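
The layout dump above lists each region twice over: once in MiB and once as hex block offsets/sizes in the superblock view. With a 4 KiB FTL block the two agree, which makes the table easy to sanity-check. A quick sketch (the 4 KiB block size is inferred from the dump itself, not stated in it):

    # region type:0x9 blk_sz:0x480000 on the base dev is the large data region
    echo $((0x480000 * 4096 / 1048576))   # -> 18432, matching "data_btm ... blocks: 18432.00 MiB"
    # the 0x20-block regions are 128 KiB metadata slots
    echo $((0x20 * 4096))                 # -> 131072 B = 0.125 MiB, matching "sb_mirror ... 0.12 MiB"
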
00:27:15.768 [2024-11-26 13:40:04.280461] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:18.299 [2024-11-26 13:40:06.556914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.557119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:18.299 [2024-11-26 13:40:06.557190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2276.465 ms 00:27:18.299 [2024-11-26 13:40:06.557217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.582714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.582868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:18.299 [2024-11-26 13:40:06.582927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.238 ms 00:27:18.299 [2024-11-26 13:40:06.582952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.583039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.583068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:18.299 [2024-11-26 13:40:06.583134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:18.299 [2024-11-26 13:40:06.583166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.613832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.613967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:18.299 [2024-11-26 13:40:06.614026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.600 ms 00:27:18.299 [2024-11-26 13:40:06.614050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.614098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.614122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:18.299 [2024-11-26 13:40:06.614142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:18.299 [2024-11-26 13:40:06.614162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.614530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.614630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:18.299 [2024-11-26 13:40:06.614692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:27:18.299 [2024-11-26 13:40:06.614717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.614767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.614938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:18.299 [2024-11-26 13:40:06.614973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:18.299 [2024-11-26 13:40:06.614995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.629193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.299 [2024-11-26 13:40:06.629315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:18.299 [2024-11-26 13:40:06.629371] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.165 ms 00:27:18.299 [2024-11-26 13:40:06.629396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.299 [2024-11-26 13:40:06.640730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:18.299 [2024-11-26 13:40:06.641657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.641748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:18.300 [2024-11-26 13:40:06.641801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.168 ms 00:27:18.300 [2024-11-26 13:40:06.641823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.673157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.673297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:18.300 [2024-11-26 13:40:06.673363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.296 ms 00:27:18.300 [2024-11-26 13:40:06.673388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.673488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.673692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:18.300 [2024-11-26 13:40:06.673730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:18.300 [2024-11-26 13:40:06.673749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.695844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.695951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:18.300 [2024-11-26 13:40:06.696006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.030 ms 00:27:18.300 [2024-11-26 13:40:06.696029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.718303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.718412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:18.300 [2024-11-26 13:40:06.718483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.224 ms 00:27:18.300 [2024-11-26 13:40:06.718507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.719068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.719144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:18.300 [2024-11-26 13:40:06.719192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.519 ms 00:27:18.300 [2024-11-26 13:40:06.719215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.785684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.785816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:18.300 [2024-11-26 13:40:06.785884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.393 ms 00:27:18.300 [2024-11-26 13:40:06.785943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.810033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
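
Every management step in the startup sequence above is logged by mngt/ftl_mngt.c as an Action/name/duration/status quadruple, so step timings can be pulled straight out of a saved copy of this output. A rough sketch (ftl.log is a hypothetical capture of this log, not a file the test produces):

    # list the slowest FTL management steps; 'duration:' lines belong to
    # individual steps, 'duration =' lines to whole management processes
    grep -o 'duration: [0-9.]* ms' ftl.log | sort -k2 -rn | head -3
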
00:27:18.300 [2024-11-26 13:40:06.810158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:18.300 [2024-11-26 13:40:06.810219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.012 ms 00:27:18.300 [2024-11-26 13:40:06.810243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.833373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.833512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:18.300 [2024-11-26 13:40:06.833574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.088 ms 00:27:18.300 [2024-11-26 13:40:06.833596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.856402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.856529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:18.300 [2024-11-26 13:40:06.856592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.767 ms 00:27:18.300 [2024-11-26 13:40:06.856615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.856654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.856760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:18.300 [2024-11-26 13:40:06.856774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:18.300 [2024-11-26 13:40:06.856784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.856862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.300 [2024-11-26 13:40:06.856875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:18.300 [2024-11-26 13:40:06.856885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:18.300 [2024-11-26 13:40:06.856892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.300 [2024-11-26 13:40:06.857807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2588.678 ms, result 0 00:27:18.300 { 00:27:18.300 "name": "ftl", 00:27:18.300 "uuid": "f1485b81-0e2e-45c5-a325-46ffaa20f8e4" 00:27:18.300 } 00:27:18.559 13:40:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:18.559 [2024-11-26 13:40:07.065081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.559 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:18.818 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:19.075 [2024-11-26 13:40:07.461472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:19.075 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:19.332 [2024-11-26 13:40:07.653840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:19.332 13:40:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:19.590 Fill FTL, iteration 1 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:19.590 13:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:19.590 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:19.590 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81581 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81581 /var/tmp/spdk.tgt.sock 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81581 ']' 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:19.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.591 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:19.591 [2024-11-26 13:40:08.077557] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
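
The xtrace lines above pin down the workload shape: two iterations, each writing 1 GiB of random data (1024 one-MiB blocks at queue depth 2) into the FTL bdev, reading it back, and recording an MD5 sum for later comparison. A minimal sketch of that loop as the traced variables imply it, with $testdir standing in for /home/vagrant/spdk_repo/spdk/test/ftl (the real upgrade_shutdown.sh may differ in detail):

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0
    sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
    done
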
00:27:19.591 [2024-11-26 13:40:08.077912] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81581 ] 00:27:19.849 [2024-11-26 13:40:08.231484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.849 [2024-11-26 13:40:08.330277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.417 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.417 13:40:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:20.417 13:40:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:20.674 ftln1 00:27:20.674 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:20.674 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81581 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81581 ']' 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81581 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81581 00:27:20.931 killing process with pid 81581 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81581' 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81581 00:27:20.931 13:40:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81581 00:27:22.831 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:22.831 13:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:22.831 [2024-11-26 13:40:10.956936] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
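
tcp_dd itself is visible in the trace just above: on first use it brings up a throwaway spdk_tgt on core 1 (pid 81581 here), attaches the NVMe/TCP-exported FTL namespace so it appears as bdev ftln1, saves the resulting bdev config to ini.json, tears the helper target down, and then drives spdk_dd against that JSON; later calls find ini.json already present and go straight to spdk_dd. A condensed sketch with the helper-target spawn/kill elided (commands are the ones traced; the glue is approximated):

    tcp_dd() {
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
        local ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
        if [[ ! -f $ini ]]; then
            # attach the TCP-exported FTL device; its namespace becomes bdev ftln1
            $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
                -f ipv4 -n nqn.2018-09.io.spdk:cnode0
            # snapshot the bdev config so spdk_dd can replay it on its own
            { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > "$ini"
        fi
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" "$@"
    }
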
00:27:22.831 [2024-11-26 13:40:10.957051] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81624 ] 00:27:22.831 [2024-11-26 13:40:11.117566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.831 [2024-11-26 13:40:11.217553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.205  [2024-11-26T13:40:13.708Z] Copying: 232/1024 [MB] (232 MBps) [2024-11-26T13:40:14.640Z] Copying: 494/1024 [MB] (262 MBps) [2024-11-26T13:40:15.573Z] Copying: 755/1024 [MB] (261 MBps) [2024-11-26T13:40:15.830Z] Copying: 1016/1024 [MB] (261 MBps) [2024-11-26T13:40:16.395Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:27:27.825 00:27:27.825 Calculate MD5 checksum, iteration 1 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:27.825 13:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:27.825 [2024-11-26 13:40:16.250728] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:27:27.825 [2024-11-26 13:40:16.250821] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81679 ] 00:27:28.083 [2024-11-26 13:40:16.405822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.083 [2024-11-26 13:40:16.501885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.453  [2024-11-26T13:40:18.589Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-26T13:40:19.154Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:27:30.584 00:27:30.584 13:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:30.584 13:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:32.480 Fill FTL, iteration 2 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7726a6741bea2403c938f2e0ae7b4db3 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.480 13:40:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:32.737 [2024-11-26 13:40:21.048437] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:27:32.737 [2024-11-26 13:40:21.048882] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81734 ] 00:27:32.737 [2024-11-26 13:40:21.205091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.737 [2024-11-26 13:40:21.303263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.115  [2024-11-26T13:40:24.065Z] Copying: 215/1024 [MB] (215 MBps) [2024-11-26T13:40:25.004Z] Copying: 434/1024 [MB] (219 MBps) [2024-11-26T13:40:25.945Z] Copying: 694/1024 [MB] (260 MBps) [2024-11-26T13:40:25.945Z] Copying: 960/1024 [MB] (266 MBps) [2024-11-26T13:40:26.510Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:27:37.940 00:27:37.940 Calculate MD5 checksum, iteration 2 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:37.940 13:40:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:38.199 [2024-11-26 13:40:26.535926] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:27:38.199 [2024-11-26 13:40:26.536036] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81794 ] 00:27:38.199 [2024-11-26 13:40:26.689998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.459 [2024-11-26 13:40:26.772147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.843  [2024-11-26T13:40:28.734Z] Copying: 692/1024 [MB] (692 MBps) [2024-11-26T13:40:29.782Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:27:41.213 00:27:41.213 13:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:41.213 13:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1fe20e1abaa0c5c3db4dd62f65856d42 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:43.756 [2024-11-26 13:40:31.903093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.756 [2024-11-26 13:40:31.903147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:43.756 true 00:27:43.756 [2024-11-26 13:40:31.903159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:43.756 [2024-11-26 13:40:31.903166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.756 [2024-11-26 13:40:31.903185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.756 [2024-11-26 13:40:31.903192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:43.756 [2024-11-26 13:40:31.903201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:43.756 [2024-11-26 13:40:31.903207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.756 [2024-11-26 13:40:31.903223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.756 [2024-11-26 13:40:31.903230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:43.756 [2024-11-26 13:40:31.903237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.756 [2024-11-26 13:40:31.903242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.756 [2024-11-26 13:40:31.903290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.189 ms, result 0 00:27:43.756 13:40:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:43.756 { 00:27:43.756 "name": "ftl", 00:27:43.756 "properties": [ 00:27:43.756 { 00:27:43.756 "name": "superblock_version", 00:27:43.756 "value": 5, 00:27:43.756 "read-only": true 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "name": "base_device", 00:27:43.756 "bands": [ 00:27:43.756 { 00:27:43.756 "id": 0, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 
00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 1, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 2, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 3, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 4, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 5, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 6, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 7, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 8, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 9, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 10, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 11, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 12, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 13, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 14, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 15, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 16, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 17, 00:27:43.756 "state": "FREE", 00:27:43.756 "validity": 0.0 00:27:43.756 } 00:27:43.756 ], 00:27:43.756 "read-only": true 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "name": "cache_device", 00:27:43.756 "type": "bdev", 00:27:43.756 "chunks": [ 00:27:43.756 { 00:27:43.756 "id": 0, 00:27:43.756 "state": "INACTIVE", 00:27:43.756 "utilization": 0.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 1, 00:27:43.756 "state": "CLOSED", 00:27:43.756 "utilization": 1.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 2, 00:27:43.756 "state": "CLOSED", 00:27:43.756 "utilization": 1.0 00:27:43.756 }, 00:27:43.756 { 00:27:43.756 "id": 3, 00:27:43.756 "state": "OPEN", 00:27:43.756 "utilization": 0.001953125 00:27:43.757 }, 00:27:43.757 { 00:27:43.757 "id": 4, 00:27:43.757 "state": "OPEN", 00:27:43.757 "utilization": 0.0 00:27:43.757 } 00:27:43.757 ], 00:27:43.757 "read-only": true 00:27:43.757 }, 00:27:43.757 { 00:27:43.757 "name": "verbose_mode", 00:27:43.757 "value": true, 00:27:43.757 "unit": "", 00:27:43.757 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:43.757 }, 00:27:43.757 { 00:27:43.757 "name": "prep_upgrade_on_shutdown", 00:27:43.757 "value": false, 00:27:43.757 "unit": "", 00:27:43.757 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:43.757 } 00:27:43.757 ] 00:27:43.757 } 00:27:43.757 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:43.757 [2024-11-26 13:40:32.311410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:43.757 [2024-11-26 13:40:32.311450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:43.757 [2024-11-26 13:40:32.311461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:43.757 [2024-11-26 13:40:32.311467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.757 [2024-11-26 13:40:32.311485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.757 [2024-11-26 13:40:32.311491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:43.757 [2024-11-26 13:40:32.311497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.757 [2024-11-26 13:40:32.311503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.757 [2024-11-26 13:40:32.311518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.757 [2024-11-26 13:40:32.311524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:43.757 [2024-11-26 13:40:32.311530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:43.757 [2024-11-26 13:40:32.311535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.757 [2024-11-26 13:40:32.311580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:27:43.757 true 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:44.016 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:44.277 [2024-11-26 13:40:32.711707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.277 [2024-11-26 13:40:32.711745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:44.277 [2024-11-26 13:40:32.711754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:44.277 [2024-11-26 13:40:32.711760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.277 [2024-11-26 13:40:32.711776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.277 [2024-11-26 13:40:32.711783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:44.277 [2024-11-26 13:40:32.711789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:44.277 [2024-11-26 13:40:32.711795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.277 [2024-11-26 13:40:32.711810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.277 [2024-11-26 13:40:32.711816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:44.277 [2024-11-26 13:40:32.711822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:44.277 [2024-11-26 13:40:32.711827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
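
The @63/@64 steps just traced gate the rest of the test: the chunk table returned by bdev_ftl_get_properties is filtered for cache_device chunks with non-zero utilization, and the test only proceeds if some data actually reached the NV cache. Standalone form of that check (the failure action on zero is an assumption; the trace only shows the comparison itself, with used=3 coming from chunks 1-3 in the dump above):

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # assumed abort path, not shown in the trace
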
[FTL][ftl] status: 0 00:27:44.277 [2024-11-26 13:40:32.711872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:27:44.277 true 00:27:44.277 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:44.537 { 00:27:44.537 "name": "ftl", 00:27:44.537 "properties": [ 00:27:44.537 { 00:27:44.537 "name": "superblock_version", 00:27:44.537 "value": 5, 00:27:44.537 "read-only": true 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "name": "base_device", 00:27:44.537 "bands": [ 00:27:44.537 { 00:27:44.537 "id": 0, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 1, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 2, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 3, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 4, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 5, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 6, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.537 "id": 7, 00:27:44.537 "state": "FREE", 00:27:44.537 "validity": 0.0 00:27:44.537 }, 00:27:44.537 { 00:27:44.538 "id": 8, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 9, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 10, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 11, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 12, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 13, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 14, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 15, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 16, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 17, 00:27:44.538 "state": "FREE", 00:27:44.538 "validity": 0.0 00:27:44.538 } 00:27:44.538 ], 00:27:44.538 "read-only": true 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "name": "cache_device", 00:27:44.538 "type": "bdev", 00:27:44.538 "chunks": [ 00:27:44.538 { 00:27:44.538 "id": 0, 00:27:44.538 "state": "INACTIVE", 00:27:44.538 "utilization": 0.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 1, 00:27:44.538 "state": "CLOSED", 00:27:44.538 "utilization": 1.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 2, 00:27:44.538 "state": "CLOSED", 00:27:44.538 "utilization": 1.0 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 3, 00:27:44.538 "state": "OPEN", 00:27:44.538 "utilization": 0.001953125 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "id": 4, 00:27:44.538 "state": "OPEN", 00:27:44.538 "utilization": 0.0 00:27:44.538 } 00:27:44.538 ], 00:27:44.538 "read-only": true 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "name": "verbose_mode", 
00:27:44.538 "value": true, 00:27:44.538 "unit": "", 00:27:44.538 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:44.538 }, 00:27:44.538 { 00:27:44.538 "name": "prep_upgrade_on_shutdown", 00:27:44.538 "value": true, 00:27:44.538 "unit": "", 00:27:44.538 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:44.538 } 00:27:44.538 ] 00:27:44.538 } 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81470 ]] 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81470 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81470 ']' 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81470 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81470 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.538 killing process with pid 81470 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81470' 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81470 00:27:44.538 13:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81470 00:27:45.104 [2024-11-26 13:40:33.513290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:45.104 [2024-11-26 13:40:33.525771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.104 [2024-11-26 13:40:33.525809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:45.104 [2024-11-26 13:40:33.525819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:45.104 [2024-11-26 13:40:33.525826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.104 [2024-11-26 13:40:33.525843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:45.104 [2024-11-26 13:40:33.528000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.104 [2024-11-26 13:40:33.528028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:45.104 [2024-11-26 13:40:33.528036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.146 ms 00:27:45.104 [2024-11-26 13:40:33.528044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.831549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.831612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:51.667 [2024-11-26 13:40:39.831625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6303.458 ms 00:27:51.667 [2024-11-26 13:40:39.831636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.832534] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.832555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:51.667 [2024-11-26 13:40:39.832563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:27:51.667 [2024-11-26 13:40:39.832569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.833457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.833485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:51.667 [2024-11-26 13:40:39.833493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:27:51.667 [2024-11-26 13:40:39.833500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.841109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.841139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:51.667 [2024-11-26 13:40:39.841147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.566 ms 00:27:51.667 [2024-11-26 13:40:39.841153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.846181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.846211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:51.667 [2024-11-26 13:40:39.846220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.002 ms 00:27:51.667 [2024-11-26 13:40:39.846227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.846283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.846291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:51.667 [2024-11-26 13:40:39.846302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:51.667 [2024-11-26 13:40:39.846307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.853381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.853409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:51.667 [2024-11-26 13:40:39.853417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.061 ms 00:27:51.667 [2024-11-26 13:40:39.853423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.860503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.860531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:51.667 [2024-11-26 13:40:39.860538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.047 ms 00:27:51.667 [2024-11-26 13:40:39.860544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.867768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.867795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:51.667 [2024-11-26 13:40:39.867802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.198 ms 00:27:51.667 [2024-11-26 13:40:39.867808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.874780] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.874807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:51.667 [2024-11-26 13:40:39.874814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.924 ms 00:27:51.667 [2024-11-26 13:40:39.874819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.874844] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:51.667 [2024-11-26 13:40:39.874862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:51.667 [2024-11-26 13:40:39.874870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:51.667 [2024-11-26 13:40:39.874876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:51.667 [2024-11-26 13:40:39.874883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:51.667 [2024-11-26 13:40:39.874973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:51.667 [2024-11-26 13:40:39.874979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f1485b81-0e2e-45c5-a325-46ffaa20f8e4 00:27:51.667 [2024-11-26 13:40:39.874986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:51.667 [2024-11-26 13:40:39.874992] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:51.667 [2024-11-26 13:40:39.874997] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:51.667 [2024-11-26 13:40:39.875003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:51.667 [2024-11-26 13:40:39.875009] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:51.667 [2024-11-26 13:40:39.875017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:51.667 [2024-11-26 13:40:39.875022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:51.667 [2024-11-26 13:40:39.875028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:51.667 [2024-11-26 13:40:39.875033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:51.667 [2024-11-26 13:40:39.875039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.875047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:51.667 [2024-11-26 13:40:39.875054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:27:51.667 [2024-11-26 13:40:39.875059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.884828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.884855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:51.667 [2024-11-26 13:40:39.884863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.756 ms 00:27:51.667 [2024-11-26 13:40:39.884873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.885142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.667 [2024-11-26 13:40:39.885154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:51.667 [2024-11-26 13:40:39.885161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:27:51.667 [2024-11-26 13:40:39.885166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.917794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.667 [2024-11-26 13:40:39.917824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:51.667 [2024-11-26 13:40:39.917833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.667 [2024-11-26 13:40:39.917842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.917865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.667 [2024-11-26 13:40:39.917872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:51.667 [2024-11-26 13:40:39.917878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.667 [2024-11-26 13:40:39.917884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.917937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.667 [2024-11-26 13:40:39.917945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:51.667 [2024-11-26 13:40:39.917951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.667 [2024-11-26 13:40:39.917958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.667 [2024-11-26 13:40:39.917973] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.667 [2024-11-26 13:40:39.917979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:51.667 [2024-11-26 13:40:39.917985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:39.917991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:39.978650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:39.978689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:51.668 [2024-11-26 13:40:39.978698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:39.978707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:51.668 [2024-11-26 13:40:40.028522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:51.668 [2024-11-26 13:40:40.028618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:51.668 [2024-11-26 13:40:40.028674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:51.668 [2024-11-26 13:40:40.028765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:51.668 [2024-11-26 13:40:40.028809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.028845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:51.668 [2024-11-26 13:40:40.028857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 
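
The statistics block in the shutdown dump above is internally consistent and worth a quick check: the two fills wrote 2 GiB, i.e. 524288 user LBAs at 4 KiB, the bands validity table accounts for exactly that many valid blocks, and dividing total media writes by user writes reproduces the reported write amplification:

    echo $((261120 + 261120 + 2048))       # bands 1-3 -> 524288, matching "total valid LBAs"
    echo "scale=4; 786752 / 524288" | bc   # -> 1.5006, matching "WAF: 1.5006"
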
[2024-11-26 13:40:40.028897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:51.668 [2024-11-26 13:40:40.028907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:51.668 [2024-11-26 13:40:40.028913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:51.668 [2024-11-26 13:40:40.028919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.668 [2024-11-26 13:40:40.029011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6503.201 ms, result 0 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81966 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81966 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81966 ']' 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.245 13:40:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:58.245 [2024-11-26 13:40:46.688176] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
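
The shutdown dump above reports total writes 786752 against user writes 524288, which is where the WAF figure of 1.5006 comes from: write amplification is the ratio of media writes to host writes. A minimal check of that arithmetic, assuming the counters mean exactly what their names suggest:

```bash
# Re-derive the WAF reported by ftl_dev_dump_stats above.
# WAF = total media writes / user (host) writes.
total_writes=786752
user_writes=524288
echo "scale=4; $total_writes / $user_writes" | bc   # prints 1.5006
```

The extra ~0.5x over the host's own writes is traffic FTL issued itself (relocation and metadata) on top of the user data.
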
00:27:58.245 [2024-11-26 13:40:46.688295] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81966 ] 00:27:58.506 [2024-11-26 13:40:46.844693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.506 [2024-11-26 13:40:46.923891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.105 [2024-11-26 13:40:47.490480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:59.105 [2024-11-26 13:40:47.490535] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:59.105 [2024-11-26 13:40:47.633789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.633847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:59.105 [2024-11-26 13:40:47.633864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.105 [2024-11-26 13:40:47.633872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.633921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.633931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:59.105 [2024-11-26 13:40:47.633940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:59.105 [2024-11-26 13:40:47.633947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.633971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:59.105 [2024-11-26 13:40:47.634625] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:59.105 [2024-11-26 13:40:47.634648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.634656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:59.105 [2024-11-26 13:40:47.634664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.685 ms 00:27:59.105 [2024-11-26 13:40:47.634671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.635743] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:59.105 [2024-11-26 13:40:47.647888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.647928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:59.105 [2024-11-26 13:40:47.647939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.147 ms 00:27:59.105 [2024-11-26 13:40:47.647946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.647997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.648007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:59.105 [2024-11-26 13:40:47.648015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:59.105 [2024-11-26 13:40:47.648021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.652737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 
13:40:47.652768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:59.105 [2024-11-26 13:40:47.652777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.655 ms 00:27:59.105 [2024-11-26 13:40:47.652784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.652839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.105 [2024-11-26 13:40:47.652848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:59.105 [2024-11-26 13:40:47.652856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:59.105 [2024-11-26 13:40:47.652863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.105 [2024-11-26 13:40:47.652904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.106 [2024-11-26 13:40:47.652916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:59.106 [2024-11-26 13:40:47.652924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.106 [2024-11-26 13:40:47.652930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.106 [2024-11-26 13:40:47.652951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:59.106 [2024-11-26 13:40:47.656386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.106 [2024-11-26 13:40:47.656427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:59.106 [2024-11-26 13:40:47.656439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.440 ms 00:27:59.106 [2024-11-26 13:40:47.656457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.106 [2024-11-26 13:40:47.656484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.106 [2024-11-26 13:40:47.656493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:59.106 [2024-11-26 13:40:47.656500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:59.106 [2024-11-26 13:40:47.656507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.106 [2024-11-26 13:40:47.656529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:59.106 [2024-11-26 13:40:47.656548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:59.106 [2024-11-26 13:40:47.656581] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:59.106 [2024-11-26 13:40:47.656595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:59.106 [2024-11-26 13:40:47.656695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:59.106 [2024-11-26 13:40:47.656712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:59.106 [2024-11-26 13:40:47.656723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:59.106 [2024-11-26 13:40:47.656733] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:59.106 [2024-11-26 13:40:47.656744] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:59.106 [2024-11-26 13:40:47.656752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:59.106 [2024-11-26 13:40:47.656759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:59.106 [2024-11-26 13:40:47.656767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:59.106 [2024-11-26 13:40:47.656774] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:59.106 [2024-11-26 13:40:47.656781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.106 [2024-11-26 13:40:47.656787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:59.106 [2024-11-26 13:40:47.656795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:27:59.106 [2024-11-26 13:40:47.656802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.106 [2024-11-26 13:40:47.656885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.106 [2024-11-26 13:40:47.656893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:59.106 [2024-11-26 13:40:47.656902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:59.106 [2024-11-26 13:40:47.656908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.106 [2024-11-26 13:40:47.657020] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:59.106 [2024-11-26 13:40:47.657036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:59.106 [2024-11-26 13:40:47.657045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:59.106 [2024-11-26 13:40:47.657066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:59.106 [2024-11-26 13:40:47.657080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:59.106 [2024-11-26 13:40:47.657087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:59.106 [2024-11-26 13:40:47.657093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:59.106 [2024-11-26 13:40:47.657106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:59.106 [2024-11-26 13:40:47.657112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:59.106 [2024-11-26 13:40:47.657124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:59.106 [2024-11-26 13:40:47.657130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:59.106 [2024-11-26 13:40:47.657145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:59.106 [2024-11-26 13:40:47.657152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657158] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:59.106 [2024-11-26 13:40:47.657164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:59.106 [2024-11-26 13:40:47.657189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:59.106 [2024-11-26 13:40:47.657209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:59.106 [2024-11-26 13:40:47.657227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:59.106 [2024-11-26 13:40:47.657245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:59.106 [2024-11-26 13:40:47.657264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:59.106 [2024-11-26 13:40:47.657283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:59.106 [2024-11-26 13:40:47.657301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:59.106 [2024-11-26 13:40:47.657308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657314] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:59.106 [2024-11-26 13:40:47.657321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:59.106 [2024-11-26 13:40:47.657328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:59.106 [2024-11-26 13:40:47.657343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:59.106 [2024-11-26 13:40:47.657350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:59.106 [2024-11-26 13:40:47.657357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:59.106 [2024-11-26 13:40:47.657364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:59.106 [2024-11-26 13:40:47.657370] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:59.106 [2024-11-26 13:40:47.657376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:59.106 [2024-11-26 13:40:47.657384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:59.106 [2024-11-26 13:40:47.657393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.106 [2024-11-26 13:40:47.657400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:59.106 [2024-11-26 13:40:47.657407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:59.106 [2024-11-26 13:40:47.657414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:59.106 [2024-11-26 13:40:47.657421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:59.107 [2024-11-26 13:40:47.657427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:59.107 [2024-11-26 13:40:47.657434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:59.107 [2024-11-26 13:40:47.657457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:59.107 [2024-11-26 13:40:47.657465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:59.107 [2024-11-26 13:40:47.657513] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:59.107 [2024-11-26 13:40:47.657521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:59.107 [2024-11-26 13:40:47.657536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:59.107 [2024-11-26 13:40:47.657543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:59.107 [2024-11-26 13:40:47.657550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:59.107 [2024-11-26 13:40:47.657558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.107 [2024-11-26 13:40:47.657564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:59.107 [2024-11-26 13:40:47.657571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.606 ms 00:27:59.107 [2024-11-26 13:40:47.657578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.107 [2024-11-26 13:40:47.657617] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:59.107 [2024-11-26 13:40:47.657629] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:01.638 [2024-11-26 13:40:49.962193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.638 [2024-11-26 13:40:49.962256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:01.638 [2024-11-26 13:40:49.962271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2304.567 ms 00:28:01.638 [2024-11-26 13:40:49.962280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.638 [2024-11-26 13:40:49.987464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.638 [2024-11-26 13:40:49.987510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:01.638 [2024-11-26 13:40:49.987523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.967 ms 00:28:01.638 [2024-11-26 13:40:49.987532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.638 [2024-11-26 13:40:49.987625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.638 [2024-11-26 13:40:49.987638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:01.638 [2024-11-26 13:40:49.987646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:01.638 [2024-11-26 13:40:49.987653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.638 [2024-11-26 13:40:50.017803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.638 [2024-11-26 13:40:50.017847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:01.638 [2024-11-26 13:40:50.017860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.111 ms 00:28:01.639 [2024-11-26 13:40:50.017868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.017904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.017912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:01.639 [2024-11-26 13:40:50.017920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:01.639 [2024-11-26 13:40:50.017927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.018278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.018304] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:01.639 [2024-11-26 13:40:50.018313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:28:01.639 [2024-11-26 13:40:50.018324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.018366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.018374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:01.639 [2024-11-26 13:40:50.018381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:01.639 [2024-11-26 13:40:50.018389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.033123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.033154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:01.639 [2024-11-26 13:40:50.033164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.712 ms 00:28:01.639 [2024-11-26 13:40:50.033172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.045600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:01.639 [2024-11-26 13:40:50.045636] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:01.639 [2024-11-26 13:40:50.045647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.045656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:01.639 [2024-11-26 13:40:50.045664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.358 ms 00:28:01.639 [2024-11-26 13:40:50.045672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.059139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.059178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:01.639 [2024-11-26 13:40:50.059191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.382 ms 00:28:01.639 [2024-11-26 13:40:50.059198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.070548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.070587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:01.639 [2024-11-26 13:40:50.070597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.316 ms 00:28:01.639 [2024-11-26 13:40:50.070604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.082097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.082129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:01.639 [2024-11-26 13:40:50.082139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.450 ms 00:28:01.639 [2024-11-26 13:40:50.082147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.082783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.082808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:01.639 [2024-11-26 
13:40:50.082818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:28:01.639 [2024-11-26 13:40:50.082825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.151533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.151614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:01.639 [2024-11-26 13:40:50.151629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.685 ms 00:28:01.639 [2024-11-26 13:40:50.151637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.162330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:01.639 [2024-11-26 13:40:50.163120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.163150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:01.639 [2024-11-26 13:40:50.163161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.424 ms 00:28:01.639 [2024-11-26 13:40:50.163168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.163264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.163277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:01.639 [2024-11-26 13:40:50.163286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:01.639 [2024-11-26 13:40:50.163293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.163347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.163357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:01.639 [2024-11-26 13:40:50.163365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:01.639 [2024-11-26 13:40:50.163372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.163392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.163400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:01.639 [2024-11-26 13:40:50.163411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:01.639 [2024-11-26 13:40:50.163418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.163468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:01.639 [2024-11-26 13:40:50.163478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.163486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:01.639 [2024-11-26 13:40:50.163494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:01.639 [2024-11-26 13:40:50.163501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.186161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.186201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:01.639 [2024-11-26 13:40:50.186213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.641 ms 00:28:01.639 [2024-11-26 13:40:50.186221] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.186295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.639 [2024-11-26 13:40:50.186304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:01.639 [2024-11-26 13:40:50.186313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:01.639 [2024-11-26 13:40:50.186320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.639 [2024-11-26 13:40:50.187246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2553.036 ms, result 0 00:28:01.639 [2024-11-26 13:40:50.202516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.898 [2024-11-26 13:40:50.218517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:01.898 [2024-11-26 13:40:50.226637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:02.468 13:40:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.468 13:40:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:02.468 13:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:02.468 13:40:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:02.468 13:40:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:02.729 [2024-11-26 13:40:51.139581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.729 [2024-11-26 13:40:51.139665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:02.729 [2024-11-26 13:40:51.139682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:02.729 [2024-11-26 13:40:51.139697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.729 [2024-11-26 13:40:51.139727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.729 [2024-11-26 13:40:51.139737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:02.729 [2024-11-26 13:40:51.139747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:02.729 [2024-11-26 13:40:51.139755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.729 [2024-11-26 13:40:51.139777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:02.729 [2024-11-26 13:40:51.139787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:02.729 [2024-11-26 13:40:51.139796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:02.729 [2024-11-26 13:40:51.139805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:02.729 [2024-11-26 13:40:51.139875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.288 ms, result 0 00:28:02.729 true 00:28:02.729 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:02.991 { 00:28:02.991 "name": "ftl", 00:28:02.991 "properties": [ 00:28:02.991 { 00:28:02.991 "name": "superblock_version", 00:28:02.991 "value": 5, 00:28:02.991 "read-only": true 00:28:02.991 }, 
00:28:02.991 { 00:28:02.991 "name": "base_device", 00:28:02.991 "bands": [ 00:28:02.991 { 00:28:02.991 "id": 0, 00:28:02.991 "state": "CLOSED", 00:28:02.991 "validity": 1.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 1, 00:28:02.991 "state": "CLOSED", 00:28:02.991 "validity": 1.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 2, 00:28:02.991 "state": "CLOSED", 00:28:02.991 "validity": 0.007843137254901933 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 3, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 4, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 5, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 6, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 7, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 8, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 9, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 10, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 11, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 12, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 13, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 14, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 15, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 16, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 17, 00:28:02.991 "state": "FREE", 00:28:02.991 "validity": 0.0 00:28:02.991 } 00:28:02.991 ], 00:28:02.991 "read-only": true 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "name": "cache_device", 00:28:02.991 "type": "bdev", 00:28:02.991 "chunks": [ 00:28:02.991 { 00:28:02.991 "id": 0, 00:28:02.991 "state": "INACTIVE", 00:28:02.991 "utilization": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 1, 00:28:02.991 "state": "OPEN", 00:28:02.991 "utilization": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 2, 00:28:02.991 "state": "OPEN", 00:28:02.991 "utilization": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 3, 00:28:02.991 "state": "FREE", 00:28:02.991 "utilization": 0.0 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "id": 4, 00:28:02.991 "state": "FREE", 00:28:02.991 "utilization": 0.0 00:28:02.991 } 00:28:02.991 ], 00:28:02.991 "read-only": true 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "name": "verbose_mode", 00:28:02.991 "value": true, 00:28:02.991 "unit": "", 00:28:02.991 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:02.991 }, 00:28:02.991 { 00:28:02.991 "name": "prep_upgrade_on_shutdown", 00:28:02.991 "value": false, 00:28:02.991 "unit": "", 00:28:02.991 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:02.991 } 00:28:02.991 ] 00:28:02.991 } 00:28:02.991 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:02.991 13:40:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:02.991 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:03.252 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:03.252 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:03.252 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:03.252 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:03.252 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.514 Validate MD5 checksum, iteration 1 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:03.514 13:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:03.514 [2024-11-26 13:40:51.909386] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
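
The `used=0` and `opened=0` values above come from jq filters run over the `bdev_ftl_get_properties` output: the first counts cache chunks with nonzero utilization, the second counts bands left in the OPENED state. A condensed reproduction of the chunk count, with the JSON trimmed to the `cache_device` entry logged above:

```bash
# Count cache chunks with nonzero utilization, as upgrade_shutdown.sh@82 does.
# The chunk list mirrors the bdev_ftl_get_properties output shown above.
json='{"properties":[{"name":"cache_device","type":"bdev","chunks":[
  {"id":0,"state":"INACTIVE","utilization":0.0},
  {"id":1,"state":"OPEN","utilization":0.0},
  {"id":2,"state":"OPEN","utilization":0.0},
  {"id":3,"state":"FREE","utilization":0.0},
  {"id":4,"state":"FREE","utilization":0.0}],"read-only":true}]}'
echo "$json" | jq '[.properties[]
    | select(.name == "cache_device")
    | .chunks[]
    | select(.utilization != 0.0)] | length'    # prints 0, so used=0
```

Both counts feeding the `[[ 0 -ne 0 ]]` guards confirm there is no dirty cache data and no open band before the checksum validation starts.
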
00:28:03.514 [2024-11-26 13:40:51.909552] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82035 ] 00:28:03.514 [2024-11-26 13:40:52.073372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.776 [2024-11-26 13:40:52.191090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.161  [2024-11-26T13:40:55.112Z] Copying: 535/1024 [MB] (535 MBps) [2024-11-26T13:40:55.112Z] Copying: 910/1024 [MB] (375 MBps) [2024-11-26T13:40:56.045Z] Copying: 1024/1024 [MB] (average 443 MBps) 00:28:07.475 00:28:07.475 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:07.475 13:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7726a6741bea2403c938f2e0ae7b4db3 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7726a6741bea2403c938f2e0ae7b4db3 != \7\7\2\6\a\6\7\4\1\b\e\a\2\4\0\3\c\9\3\8\f\2\e\0\a\e\7\b\4\d\b\3 ]] 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:09.377 Validate MD5 checksum, iteration 2 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:09.377 13:40:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:09.635 [2024-11-26 13:40:57.959055] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
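
Iteration 1 above reads a 1024 MiB window from `ftln1` into a scratch file over NVMe/TCP, hashes it, and asserts the digest matches the one recorded for that window when the pattern was written; `--skip` then advances by 1024 so iteration 2 checks the next window. A sketch of the verify step, with `expected_sum` standing in for the stored digest (the value shown is the one logged for iteration 1):

```bash
# Verify one 1024 MiB window of the FTL bdev after readback, as the
# test_validate_checksum loop above does. expected_sum is the digest
# recorded at write time; 7726a674... is the value logged for iteration 1.
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
expected_sum=7726a6741bea2403c938f2e0ae7b4db3
sum=$(md5sum "$file" | cut -f1 -d' ')
[[ "$sum" == "$expected_sum" ]] || { echo "MD5 mismatch: $sum" >&2; exit 1; }
```
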
00:28:09.635 [2024-11-26 13:40:57.959164] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82106 ] 00:28:09.635 [2024-11-26 13:40:58.116746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.892 [2024-11-26 13:40:58.214560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.263  [2024-11-26T13:41:00.880Z] Copying: 529/1024 [MB] (529 MBps) [2024-11-26T13:41:01.176Z] Copying: 939/1024 [MB] (410 MBps) [2024-11-26T13:41:06.440Z] Copying: 1024/1024 [MB] (average 462 MBps) 00:28:17.870 00:28:17.870 13:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:17.870 13:41:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1fe20e1abaa0c5c3db4dd62f65856d42 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1fe20e1abaa0c5c3db4dd62f65856d42 != \1\f\e\2\0\e\1\a\b\a\a\0\c\5\c\3\d\b\4\d\d\6\2\f\6\5\8\5\6\d\4\2 ]] 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81966 ]] 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81966 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82213 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82213 00:28:19.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82213 ']' 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
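
With both windows verified, `tcp_target_shutdown_dirty` above kills pid 81966 with SIGKILL, deliberately denying FTL its clean shutdown path, and `tcp_target_setup` brings up a fresh target (pid 82213) on the same config so the next startup must recover from dirty state. A minimal sketch of that pattern, assuming the same binary, config, and socket paths as the log; the real `waitforlisten` in autotest_common.sh retries far more carefully:

```bash
# Dirty shutdown and restart, as exercised above: SIGKILL leaves FTL no
# chance to persist its shutdown state, so the next startup must recover.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CNFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

kill -9 "$spdk_tgt_pid"                      # no clean FTL teardown
"$SPDK_TGT" --cpumask='[0]' --config="$CNFG" &
spdk_tgt_pid=$!
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1                                # poll until the RPC socket answers
done
```

The startup that follows shows the consequence: the superblock loads with `SHM: clean 0, shm_clean 0`, i.e. FTL sees it was not shut down cleanly.
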
00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.771 13:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:19.771 [2024-11-26 13:41:07.922946] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:28:19.771 [2024-11-26 13:41:07.923229] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82213 ] 00:28:19.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81966 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:19.771 [2024-11-26 13:41:08.081106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.771 [2024-11-26 13:41:08.159089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.340 [2024-11-26 13:41:08.735360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:20.340 [2024-11-26 13:41:08.735573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:20.340 [2024-11-26 13:41:08.878258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.878394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:20.340 [2024-11-26 13:41:08.878475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:20.340 [2024-11-26 13:41:08.878502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.878559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.878579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:20.340 [2024-11-26 13:41:08.878587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:20.340 [2024-11-26 13:41:08.878593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.878614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:20.340 [2024-11-26 13:41:08.879166] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:20.340 [2024-11-26 13:41:08.879185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.879192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:20.340 [2024-11-26 13:41:08.879199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.579 ms 00:28:20.340 [2024-11-26 13:41:08.879205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.879452] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:20.340 [2024-11-26 13:41:08.891645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.891676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:20.340 [2024-11-26 13:41:08.891686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.193 ms 00:28:20.340 [2024-11-26 13:41:08.891693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.898585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:20.340 [2024-11-26 13:41:08.898615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:20.340 [2024-11-26 13:41:08.898623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:20.340 [2024-11-26 13:41:08.898629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.898863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.898872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:20.340 [2024-11-26 13:41:08.898879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:28:20.340 [2024-11-26 13:41:08.898885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.340 [2024-11-26 13:41:08.898922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.340 [2024-11-26 13:41:08.898929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:20.340 [2024-11-26 13:41:08.898935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:20.340 [2024-11-26 13:41:08.898940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.898957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.341 [2024-11-26 13:41:08.898963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:20.341 [2024-11-26 13:41:08.898969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:20.341 [2024-11-26 13:41:08.898975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.898990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:20.341 [2024-11-26 13:41:08.901306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.341 [2024-11-26 13:41:08.901424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:20.341 [2024-11-26 13:41:08.901439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.319 ms 00:28:20.341 [2024-11-26 13:41:08.901473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.901493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.341 [2024-11-26 13:41:08.901500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:20.341 [2024-11-26 13:41:08.901506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:20.341 [2024-11-26 13:41:08.901511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.901528] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:20.341 [2024-11-26 13:41:08.901542] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:20.341 [2024-11-26 13:41:08.901567] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:20.341 [2024-11-26 13:41:08.901580] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:20.341 [2024-11-26 13:41:08.901658] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:20.341 [2024-11-26 13:41:08.901666] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:20.341 [2024-11-26 13:41:08.901673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:20.341 [2024-11-26 13:41:08.901680] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:20.341 [2024-11-26 13:41:08.901687] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:20.341 [2024-11-26 13:41:08.901693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:20.341 [2024-11-26 13:41:08.901699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:20.341 [2024-11-26 13:41:08.901704] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:20.341 [2024-11-26 13:41:08.901709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:20.341 [2024-11-26 13:41:08.901717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.341 [2024-11-26 13:41:08.901722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:20.341 [2024-11-26 13:41:08.901728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:28:20.341 [2024-11-26 13:41:08.901733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.901797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.341 [2024-11-26 13:41:08.901803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:20.341 [2024-11-26 13:41:08.901808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:20.341 [2024-11-26 13:41:08.901813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.341 [2024-11-26 13:41:08.901888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:20.341 [2024-11-26 13:41:08.901897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:20.341 [2024-11-26 13:41:08.901903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:20.341 [2024-11-26 13:41:08.901908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:20.341 [2024-11-26 13:41:08.901919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:20.341 [2024-11-26 13:41:08.901930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:20.341 [2024-11-26 13:41:08.901936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:20.341 [2024-11-26 13:41:08.901941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:20.341 [2024-11-26 13:41:08.901951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:20.341 [2024-11-26 13:41:08.901956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:20.341 [2024-11-26 13:41:08.901967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:20.341 [2024-11-26 13:41:08.901972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:20.341 [2024-11-26 13:41:08.901983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:20.341 [2024-11-26 13:41:08.901988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.901993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:20.341 [2024-11-26 13:41:08.901998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:20.341 [2024-11-26 13:41:08.902018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:20.341 [2024-11-26 13:41:08.902033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:20.341 [2024-11-26 13:41:08.902047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:20.341 [2024-11-26 13:41:08.902062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:20.341 [2024-11-26 13:41:08.902078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:20.341 [2024-11-26 13:41:08.902093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:20.341 [2024-11-26 13:41:08.902108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:20.341 [2024-11-26 13:41:08.902113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902118] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:20.341 [2024-11-26 13:41:08.902124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:20.341 [2024-11-26 13:41:08.902129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:20.341 [2024-11-26 13:41:08.902141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:20.341 [2024-11-26 13:41:08.902147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:20.341 [2024-11-26 13:41:08.902152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:20.341 [2024-11-26 13:41:08.902158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:20.341 [2024-11-26 13:41:08.902162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:20.341 [2024-11-26 13:41:08.902167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:20.341 [2024-11-26 13:41:08.902174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:20.341 [2024-11-26 13:41:08.902180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:20.341 [2024-11-26 13:41:08.902192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:20.341 [2024-11-26 13:41:08.902209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:20.341 [2024-11-26 13:41:08.902214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:20.341 [2024-11-26 13:41:08.902219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:20.341 [2024-11-26 13:41:08.902225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:20.341 [2024-11-26 13:41:08.902241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:20.342 [2024-11-26 13:41:08.902246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:20.342 [2024-11-26 13:41:08.902252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:20.342 [2024-11-26 13:41:08.902257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:20.342 [2024-11-26 13:41:08.902263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:20.342 [2024-11-26 13:41:08.902268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.342 [2024-11-26 13:41:08.902276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.342 [2024-11-26 13:41:08.902282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:20.342 [2024-11-26 13:41:08.902287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:20.342 [2024-11-26 13:41:08.902293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:20.342 [2024-11-26 13:41:08.902298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.342 [2024-11-26 13:41:08.902304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:20.342 [2024-11-26 13:41:08.902309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms 00:28:20.342 [2024-11-26 13:41:08.902315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.921409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.921533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:20.601 [2024-11-26 13:41:08.921596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.057 ms 00:28:20.601 [2024-11-26 13:41:08.921622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.921661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.921677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:20.601 [2024-11-26 13:41:08.921693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:20.601 [2024-11-26 13:41:08.921707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.945676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.945780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:20.601 [2024-11-26 13:41:08.945829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.919 ms 00:28:20.601 [2024-11-26 13:41:08.945852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.945887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.945904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:20.601 [2024-11-26 13:41:08.945918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:20.601 [2024-11-26 13:41:08.945937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.946017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.946037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:20.601 [2024-11-26 13:41:08.946106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:20.601 [2024-11-26 13:41:08.946134] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.946185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.946202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:20.601 [2024-11-26 13:41:08.946217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:20.601 [2024-11-26 13:41:08.946230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.957645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.957742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:20.601 [2024-11-26 13:41:08.957799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.387 ms 00:28:20.601 [2024-11-26 13:41:08.957827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.957908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.957950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:20.601 [2024-11-26 13:41:08.957966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:20.601 [2024-11-26 13:41:08.957981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.978057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.978224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:20.601 [2024-11-26 13:41:08.978300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.053 ms 00:28:20.601 [2024-11-26 13:41:08.978322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:08.988550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:08.988649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:20.601 [2024-11-26 13:41:08.988704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:28:20.601 [2024-11-26 13:41:08.988728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.031652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.031800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:20.601 [2024-11-26 13:41:09.031855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.860 ms 00:28:20.601 [2024-11-26 13:41:09.031879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.031989] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:20.601 [2024-11-26 13:41:09.032083] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:20.601 [2024-11-26 13:41:09.032295] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:20.601 [2024-11-26 13:41:09.032384] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:20.601 [2024-11-26 13:41:09.032460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.032482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:20.601 [2024-11-26 
13:41:09.032498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:28:20.601 [2024-11-26 13:41:09.032512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.032597] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:20.601 [2024-11-26 13:41:09.032670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.032714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:20.601 [2024-11-26 13:41:09.032794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:28:20.601 [2024-11-26 13:41:09.032816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.044066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.044168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:20.601 [2024-11-26 13:41:09.044218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.196 ms 00:28:20.601 [2024-11-26 13:41:09.044243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.050756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.050850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:20.601 [2024-11-26 13:41:09.050906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:20.601 [2024-11-26 13:41:09.050928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.601 [2024-11-26 13:41:09.051004] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:20.601 [2024-11-26 13:41:09.051224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.601 [2024-11-26 13:41:09.051260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:20.601 [2024-11-26 13:41:09.051366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:28:20.601 [2024-11-26 13:41:09.051386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.167 [2024-11-26 13:41:09.663505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.167 [2024-11-26 13:41:09.663711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:21.167 [2024-11-26 13:41:09.663779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 611.420 ms 00:28:21.167 [2024-11-26 13:41:09.663805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.167 [2024-11-26 13:41:09.667615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.167 [2024-11-26 13:41:09.667738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:21.167 [2024-11-26 13:41:09.667791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:28:21.167 [2024-11-26 13:41:09.667820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.167 [2024-11-26 13:41:09.668615] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:21.167 [2024-11-26 13:41:09.668678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.167 [2024-11-26 13:41:09.668740] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:21.167 [2024-11-26 13:41:09.668850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:28:21.167 [2024-11-26 13:41:09.668881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.167 [2024-11-26 13:41:09.669016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.167 [2024-11-26 13:41:09.669049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:21.167 [2024-11-26 13:41:09.669069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:21.167 [2024-11-26 13:41:09.669094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.167 [2024-11-26 13:41:09.669144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 618.132 ms, result 0 00:28:21.167 [2024-11-26 13:41:09.669204] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:21.167 [2024-11-26 13:41:09.669435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.167 [2024-11-26 13:41:09.669477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:21.167 [2024-11-26 13:41:09.669497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:28:21.167 [2024-11-26 13:41:09.669516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.761 [2024-11-26 13:41:10.239120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.761 [2024-11-26 13:41:10.239186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:21.761 [2024-11-26 13:41:10.239217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 568.601 ms 00:28:21.761 [2024-11-26 13:41:10.239226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.761 [2024-11-26 13:41:10.243638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.243670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:21.762 [2024-11-26 13:41:10.243680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.430 ms 00:28:21.762 [2024-11-26 13:41:10.243687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.244658] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:21.762 [2024-11-26 13:41:10.244723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.244731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:21.762 [2024-11-26 13:41:10.244739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.008 ms 00:28:21.762 [2024-11-26 13:41:10.244747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.244778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.244787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:21.762 [2024-11-26 13:41:10.244795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:21.762 [2024-11-26 13:41:10.244802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 
13:41:10.244836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 575.627 ms, result 0 00:28:21.762 [2024-11-26 13:41:10.244876] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:21.762 [2024-11-26 13:41:10.244886] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:21.762 [2024-11-26 13:41:10.244895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.244903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:21.762 [2024-11-26 13:41:10.244911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1193.903 ms 00:28:21.762 [2024-11-26 13:41:10.244918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.244946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.244957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:21.762 [2024-11-26 13:41:10.244965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:21.762 [2024-11-26 13:41:10.244973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.255667] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:21.762 [2024-11-26 13:41:10.255874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.255890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:21.762 [2024-11-26 13:41:10.255900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.886 ms 00:28:21.762 [2024-11-26 13:41:10.255908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.256581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.256597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:21.762 [2024-11-26 13:41:10.256609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:28:21.762 [2024-11-26 13:41:10.256616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.258834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.258941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:21.762 [2024-11-26 13:41:10.258954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.201 ms 00:28:21.762 [2024-11-26 13:41:10.258963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.259003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.259011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:21.762 [2024-11-26 13:41:10.259018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:21.762 [2024-11-26 13:41:10.259029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.259129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.259138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:21.762 
[2024-11-26 13:41:10.259146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:21.762 [2024-11-26 13:41:10.259152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.259172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.259179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:21.762 [2024-11-26 13:41:10.259186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:21.762 [2024-11-26 13:41:10.259193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.259220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:21.762 [2024-11-26 13:41:10.259228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.259235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:21.762 [2024-11-26 13:41:10.259242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:21.762 [2024-11-26 13:41:10.259249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.259300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.762 [2024-11-26 13:41:10.259313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:21.762 [2024-11-26 13:41:10.259320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:21.762 [2024-11-26 13:41:10.259327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.762 [2024-11-26 13:41:10.260244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1381.579 ms, result 0 00:28:21.762 [2024-11-26 13:41:10.272599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.762 [2024-11-26 13:41:10.288587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:21.762 [2024-11-26 13:41:10.296710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:22.043 13:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.043 13:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:22.043 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:22.043 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:22.043 Validate MD5 checksum, iteration 1 00:28:22.043 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:22.044 13:41:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:22.044 13:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:22.044 [2024-11-26 13:41:10.518457] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 00:28:22.044 [2024-11-26 13:41:10.518695] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82248 ] 00:28:22.331 [2024-11-26 13:41:10.678112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.331 [2024-11-26 13:41:10.787885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.247  [2024-11-26T13:41:13.388Z] Copying: 592/1024 [MB] (592 MBps) [2024-11-26T13:41:14.768Z] Copying: 1024/1024 [MB] (average 576 MBps) 00:28:26.198 00:28:26.198 13:41:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:26.198 13:41:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:28.097 Validate MD5 checksum, iteration 2 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7726a6741bea2403c938f2e0ae7b4db3 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7726a6741bea2403c938f2e0ae7b4db3 != \7\7\2\6\a\6\7\4\1\b\e\a\2\4\0\3\c\9\3\8\f\2\e\0\a\e\7\b\4\d\b\3 ]] 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:28.097 13:41:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:28.355 [2024-11-26 13:41:16.669271] Starting SPDK v25.01-pre git sha1 
a9e1e4309 / DPDK 24.03.0 initialization... 00:28:28.355 [2024-11-26 13:41:16.669376] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82316 ] 00:28:28.355 [2024-11-26 13:41:16.829002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.612 [2024-11-26 13:41:16.932574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.984  [2024-11-26T13:41:19.118Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-26T13:41:24.372Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:28:35.802 00:28:36.060 13:41:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:36.060 13:41:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1fe20e1abaa0c5c3db4dd62f65856d42 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1fe20e1abaa0c5c3db4dd62f65856d42 != \1\f\e\2\0\e\1\a\b\a\a\0\c\5\c\3\d\b\4\d\d\6\2\f\6\5\8\5\6\d\4\2 ]] 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82213 ]] 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82213 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82213 ']' 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82213 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82213 00:28:37.961 killing process with pid 82213 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82213' 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 82213 00:28:37.961 13:41:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82213 00:28:38.528 [2024-11-26 13:41:27.045327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:38.528 [2024-11-26 13:41:27.057727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.528 [2024-11-26 13:41:27.057764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:38.528 [2024-11-26 13:41:27.057774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:38.528 [2024-11-26 13:41:27.057781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.528 [2024-11-26 13:41:27.057799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:38.528 [2024-11-26 13:41:27.059966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.528 [2024-11-26 13:41:27.059991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:38.528 [2024-11-26 13:41:27.060003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.156 ms 00:28:38.528 [2024-11-26 13:41:27.060010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.528 [2024-11-26 13:41:27.060188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.528 [2024-11-26 13:41:27.060197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:38.528 [2024-11-26 13:41:27.060204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:28:38.528 [2024-11-26 13:41:27.060210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.528 [2024-11-26 13:41:27.061131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.528 [2024-11-26 13:41:27.061152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:38.528 [2024-11-26 13:41:27.061160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.910 ms 00:28:38.528 [2024-11-26 13:41:27.061169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.528 [2024-11-26 13:41:27.062068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.528 [2024-11-26 13:41:27.062088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:38.529 [2024-11-26 13:41:27.062096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.877 ms 00:28:38.529 [2024-11-26 13:41:27.062103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.529 [2024-11-26 13:41:27.069418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.529 [2024-11-26 13:41:27.069454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:38.529 [2024-11-26 13:41:27.069462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.288 ms 00:28:38.529 [2024-11-26 13:41:27.069472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.529 [2024-11-26 13:41:27.073562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.529 [2024-11-26 13:41:27.073591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:38.529 [2024-11-26 13:41:27.073599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.063 ms 00:28:38.529 [2024-11-26 13:41:27.073606] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:38.529 [2024-11-26 13:41:27.073665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.529 [2024-11-26 13:41:27.073673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:38.529 [2024-11-26 13:41:27.073679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:38.529 [2024-11-26 13:41:27.073689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.529 [2024-11-26 13:41:27.081104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.529 [2024-11-26 13:41:27.081130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:38.529 [2024-11-26 13:41:27.081137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.403 ms 00:28:38.529 [2024-11-26 13:41:27.081143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.529 [2024-11-26 13:41:27.088243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.529 [2024-11-26 13:41:27.088268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:38.529 [2024-11-26 13:41:27.088275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.075 ms 00:28:38.529 [2024-11-26 13:41:27.088281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.095222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.788 [2024-11-26 13:41:27.095247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:38.788 [2024-11-26 13:41:27.095254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.917 ms 00:28:38.788 [2024-11-26 13:41:27.095260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.102339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.788 [2024-11-26 13:41:27.102366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:38.788 [2024-11-26 13:41:27.102372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.036 ms 00:28:38.788 [2024-11-26 13:41:27.102378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.102402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:38.788 [2024-11-26 13:41:27.102413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:38.788 [2024-11-26 13:41:27.102421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:38.788 [2024-11-26 13:41:27.102428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:38.788 [2024-11-26 13:41:27.102434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 
[2024-11-26 13:41:27.102473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:38.788 [2024-11-26 13:41:27.102535] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:38.788 [2024-11-26 13:41:27.102541] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f1485b81-0e2e-45c5-a325-46ffaa20f8e4 00:28:38.788 [2024-11-26 13:41:27.102547] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:38.788 [2024-11-26 13:41:27.102553] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:38.788 [2024-11-26 13:41:27.102558] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:38.788 [2024-11-26 13:41:27.102564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:38.788 [2024-11-26 13:41:27.102569] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:38.788 [2024-11-26 13:41:27.102575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:38.788 [2024-11-26 13:41:27.102581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:38.788 [2024-11-26 13:41:27.102586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:38.788 [2024-11-26 13:41:27.102591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:38.788 [2024-11-26 13:41:27.102596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.788 [2024-11-26 13:41:27.102605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:38.788 [2024-11-26 13:41:27.102612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:28:38.788 [2024-11-26 13:41:27.102618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.112365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.788 [2024-11-26 13:41:27.112390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:38.788 [2024-11-26 13:41:27.112399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.735 ms 00:28:38.788 [2024-11-26 13:41:27.112405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:38.788 [2024-11-26 13:41:27.112690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:38.788 [2024-11-26 13:41:27.112698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:38.788 [2024-11-26 13:41:27.112704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:28:38.788 [2024-11-26 13:41:27.112709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.146503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.146537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:38.788 [2024-11-26 13:41:27.146545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.146552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.146584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.146591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:38.788 [2024-11-26 13:41:27.146597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.146603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.146681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.146690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:38.788 [2024-11-26 13:41:27.146696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.146702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.146717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.146724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:38.788 [2024-11-26 13:41:27.146729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.146735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.208721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.208765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:38.788 [2024-11-26 13:41:27.208774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.208780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.258865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.258904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:38.788 [2024-11-26 13:41:27.258913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.258919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.258977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.258985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:38.788 [2024-11-26 13:41:27.258991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.258997] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.259039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.259053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:38.788 [2024-11-26 13:41:27.259062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.259067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.259138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.259145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:38.788 [2024-11-26 13:41:27.259151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.259157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.259179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.788 [2024-11-26 13:41:27.259186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:38.788 [2024-11-26 13:41:27.259194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.788 [2024-11-26 13:41:27.259199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.788 [2024-11-26 13:41:27.259227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.789 [2024-11-26 13:41:27.259234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:38.789 [2024-11-26 13:41:27.259240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.789 [2024-11-26 13:41:27.259246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.789 [2024-11-26 13:41:27.259278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:38.789 [2024-11-26 13:41:27.259285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:38.789 [2024-11-26 13:41:27.259293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:38.789 [2024-11-26 13:41:27.259298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:38.789 [2024-11-26 13:41:27.259390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 201.642 ms, result 0 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:39.354 Remove shared memory files 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:39.354 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:39.355 13:41:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81966 00:28:39.355 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:39.355 13:41:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:39.355 00:28:39.355 real 1m27.084s 00:28:39.355 user 2m0.576s 00:28:39.355 sys 0m18.019s 00:28:39.355 13:41:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.355 13:41:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:39.355 ************************************ 00:28:39.355 END TEST ftl_upgrade_shutdown 00:28:39.355 ************************************ 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@14 -- # killprocess 74992 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@954 -- # '[' -z 74992 ']' 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@958 -- # kill -0 74992 00:28:39.614 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74992) - No such process 00:28:39.614 Process with pid 74992 is not found 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74992 is not found' 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82468 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82468 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@835 -- # '[' -z 82468 ']' 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.614 13:41:27 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.614 13:41:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:39.614 [2024-11-26 13:41:28.009721] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization... 
00:28:39.614 [2024-11-26 13:41:28.009721] Starting SPDK v25.01-pre git sha1 a9e1e4309 / DPDK 24.03.0 initialization...
00:28:39.614 [2024-11-26 13:41:28.009846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82468 ]
00:28:39.614 [2024-11-26 13:41:28.165420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.872 [2024-11-26 13:41:28.244651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:40.439 13:41:28 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:40.439 13:41:28 ftl -- common/autotest_common.sh@868 -- # return 0
00:28:40.439 13:41:28 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:28:40.439 nvme0n1
00:28:40.697 13:41:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:28:40.697 13:41:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:40.697 13:41:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:40.697 13:41:29 ftl -- ftl/common.sh@28 -- # stores=b9b8c808-d6f4-46f7-950f-9f765f80950c
00:28:40.697 13:41:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:28:40.697 13:41:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9b8c808-d6f4-46f7-950f-9f765f80950c
00:28:40.955 13:41:29 ftl -- ftl/ftl.sh@23 -- # killprocess 82468
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 82468 ']'
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@958 -- # kill -0 82468
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@959 -- # uname
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82468
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82468'
killing process with pid 82468
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@973 -- # kill 82468
00:28:40.955 13:41:29 ftl -- common/autotest_common.sh@978 -- # wait 82468
00:28:42.329 13:41:30 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:42.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:42.329 Waiting for block devices as requested
00:28:42.588 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:28:42.588 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:28:42.588 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:28:42.588 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:28:47.852 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:28:47.852 13:41:36 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:28:47.852 13:41:36 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:47.852 Remove shared memory files
00:28:47.852 13:41:36 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:28:47.852 13:41:36 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:28:47.852 13:41:36 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:28:47.852 13:41:36 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:47.852 13:41:36 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:28:47.852
00:28:47.852 real 11m1.429s
00:28:47.852 user 13m20.486s
00:28:47.852 sys 1m13.457s
00:28:47.852 13:41:36 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:47.852 13:41:36 ftl -- common/autotest_common.sh@10 -- # set +x
00:28:47.852 ************************************
00:28:47.852 END TEST ftl
00:28:47.852 ************************************
00:28:47.852 13:41:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:47.852 13:41:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:28:47.852 13:41:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:28:47.852 13:41:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:28:47.852 13:41:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:28:47.852 13:41:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:28:47.852 13:41:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:28:47.852 13:41:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:28:47.852 13:41:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:28:47.852 13:41:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:28:47.852 13:41:36 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:47.852 13:41:36 -- common/autotest_common.sh@10 -- # set +x
00:28:47.852 13:41:36 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:28:47.852 13:41:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:28:47.852 13:41:36 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:28:47.852 13:41:36 -- common/autotest_common.sh@10 -- # set +x
00:28:48.783 INFO: APP EXITING
00:28:48.783 INFO: killing all VMs
00:28:48.783 INFO: killing vhost app
00:28:48.783 INFO: EXIT DONE
00:28:49.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:49.299 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:28:49.299 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:28:49.299 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:28:49.557 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:28:49.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:50.074 Cleaning
00:28:50.074 Removing: /var/run/dpdk/spdk0/config
00:28:50.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:50.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:50.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:50.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:50.074 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:50.074 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:50.074 Removing: /var/run/dpdk/spdk0
00:28:50.074 Removing: /var/run/dpdk/spdk_pid56900
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57091
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57309
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57402
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57436
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57559
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57577
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57770
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57869
00:28:50.074 Removing: /var/run/dpdk/spdk_pid57965
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58076
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58167
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58207
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58243
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58314
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58409
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58845
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58898
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58961
00:28:50.074 Removing: /var/run/dpdk/spdk_pid58977
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59068
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59084
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59175
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59191
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59244
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59262
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59315
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59333
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59482
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59519
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59602
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59774
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59857
00:28:50.074 Removing: /var/run/dpdk/spdk_pid59895
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60319
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60422
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60533
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60586
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60606
00:28:50.074 Removing: /var/run/dpdk/spdk_pid60690
00:28:50.074 Removing: /var/run/dpdk/spdk_pid61308
00:28:50.074 Removing: /var/run/dpdk/spdk_pid61345
00:28:50.074 Removing: /var/run/dpdk/spdk_pid61815
00:28:50.074 Removing: /var/run/dpdk/spdk_pid61913
00:28:50.074 Removing: /var/run/dpdk/spdk_pid62023
00:28:50.074 Removing: /var/run/dpdk/spdk_pid62076
00:28:50.074 Removing: /var/run/dpdk/spdk_pid62096
00:28:50.074 Removing: /var/run/dpdk/spdk_pid62127
00:28:50.074 Removing: /var/run/dpdk/spdk_pid63963
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64095
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64104
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64116
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64158
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64162
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64174
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64219
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64223
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64235
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64280
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64284
00:28:50.074 Removing: /var/run/dpdk/spdk_pid64296
00:28:50.074 Removing: /var/run/dpdk/spdk_pid65691
00:28:50.074 Removing: /var/run/dpdk/spdk_pid65788
00:28:50.074 Removing: /var/run/dpdk/spdk_pid67186
00:28:50.074 Removing: /var/run/dpdk/spdk_pid68965
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69039
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69114
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69218
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69310
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69411
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69484
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69559
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69670
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69762
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69858
00:28:50.074 Removing: /var/run/dpdk/spdk_pid69926
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70007
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70112
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70204
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70304
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70368
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70452
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70556
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70648
00:28:50.074 Removing: /var/run/dpdk/spdk_pid70755
00:28:50.333 Removing: /var/run/dpdk/spdk_pid70818
00:28:50.333 Removing: /var/run/dpdk/spdk_pid70898
00:28:50.333 Removing: /var/run/dpdk/spdk_pid70972
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71046
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71144
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71240
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71335
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71409
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71482
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71554
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71634
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71737
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71833
00:28:50.333 Removing: /var/run/dpdk/spdk_pid71982
00:28:50.333 Removing: /var/run/dpdk/spdk_pid72266
00:28:50.333 Removing: /var/run/dpdk/spdk_pid72303
00:28:50.333 Removing: /var/run/dpdk/spdk_pid72755
00:28:50.333 Removing: /var/run/dpdk/spdk_pid72939
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73033
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73143
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73195
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73216
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73515
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73577
00:28:50.333 Removing: /var/run/dpdk/spdk_pid73645
00:28:50.333 Removing: /var/run/dpdk/spdk_pid74050
00:28:50.333 Removing: /var/run/dpdk/spdk_pid74193
00:28:50.333 Removing: /var/run/dpdk/spdk_pid74992
00:28:50.333 Removing: /var/run/dpdk/spdk_pid75124
00:28:50.333 Removing: /var/run/dpdk/spdk_pid75288
00:28:50.333 Removing: /var/run/dpdk/spdk_pid75391
00:28:50.333 Removing: /var/run/dpdk/spdk_pid75693
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76002
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76347
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76526
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76669
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76716
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76876
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76907
00:28:50.333 Removing: /var/run/dpdk/spdk_pid76963
00:28:50.333 Removing: /var/run/dpdk/spdk_pid77219
00:28:50.333 Removing: /var/run/dpdk/spdk_pid77453
00:28:50.333 Removing: /var/run/dpdk/spdk_pid78135
00:28:50.333 Removing: /var/run/dpdk/spdk_pid78445
00:28:50.333 Removing: /var/run/dpdk/spdk_pid78924
00:28:50.333 Removing: /var/run/dpdk/spdk_pid79760
00:28:50.333 Removing: /var/run/dpdk/spdk_pid79885
00:28:50.333 Removing: /var/run/dpdk/spdk_pid79968
00:28:50.333 Removing: /var/run/dpdk/spdk_pid80378
00:28:50.333 Removing: /var/run/dpdk/spdk_pid80442
00:28:50.333 Removing: /var/run/dpdk/spdk_pid80781
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81114
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81470
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81581
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81624
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81679
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81734
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81794
00:28:50.333 Removing: /var/run/dpdk/spdk_pid81966
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82035
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82106
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82213
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82248
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82316
00:28:50.333 Removing: /var/run/dpdk/spdk_pid82468
00:28:50.333 Clean
00:28:50.333 13:41:38 -- common/autotest_common.sh@1453 -- # return 0
00:28:50.333 13:41:38 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:28:50.333 13:41:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:50.333 13:41:38 -- common/autotest_common.sh@10 -- # set +x
00:28:50.333 13:41:38 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:28:50.333 13:41:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:50.333 13:41:38 -- common/autotest_common.sh@10 -- # set +x
00:28:50.333 13:41:38 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:50.333 13:41:38 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:50.333 13:41:38 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:50.333 13:41:38 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:28:50.333 13:41:38 -- spdk/autotest.sh@398 -- # hostname
00:28:50.333 13:41:38 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:50.591 geninfo: WARNING: invalid characters removed from testname!
00:29:17.141 13:42:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:17.706 13:42:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:20.989 13:42:08 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:23.143 13:42:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:25.044 13:42:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:26.942 13:42:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:28.840 13:42:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:28.840 13:42:17 -- spdk/autorun.sh@1 -- $ timing_finish
00:29:28.840 13:42:17 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:29:28.840 13:42:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:28.840 13:42:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:28.840 13:42:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:28.840 + [[ -n 5033 ]]
00:29:28.840 + sudo kill 5033
00:29:28.848 [Pipeline] }
00:29:28.863 [Pipeline] // timeout
00:29:28.868 [Pipeline] }
00:29:28.882 [Pipeline] // stage
00:29:28.887 [Pipeline] }
00:29:28.901 [Pipeline] // catchError
00:29:28.910 [Pipeline] stage
00:29:28.912 [Pipeline] { (Stop VM)
00:29:28.925 [Pipeline] sh
00:29:29.202 + vagrant halt
00:29:31.730 ==> default: Halting domain...
00:29:38.314 [Pipeline] sh
00:29:38.598 + vagrant destroy -f
00:29:41.142 ==> default: Removing domain...
00:29:41.731 [Pipeline] sh
00:29:42.022 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:29:42.035 [Pipeline] }
00:29:42.055 [Pipeline] // stage
00:29:42.064 [Pipeline] }
00:29:42.080 [Pipeline] // dir
00:29:42.089 [Pipeline] }
00:29:42.107 [Pipeline] // wrap
00:29:42.116 [Pipeline] }
00:29:42.133 [Pipeline] // catchError
00:29:42.143 [Pipeline] stage
00:29:42.145 [Pipeline] { (Epilogue)
00:29:42.162 [Pipeline] sh
00:29:42.442 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:47.720 [Pipeline] catchError
00:29:47.723 [Pipeline] {
00:29:47.737 [Pipeline] sh
00:29:48.015 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:48.015 Artifacts sizes are good
00:29:48.024 [Pipeline] }
00:29:48.039 [Pipeline] // catchError
00:29:48.052 [Pipeline] archiveArtifacts
00:29:48.059 Archiving artifacts
00:29:48.193 [Pipeline] cleanWs
00:29:48.287 [WS-CLEANUP] Deleting project workspace...
00:29:48.287 [WS-CLEANUP] Deferred wipeout is used...
00:29:48.355 [WS-CLEANUP] done
00:29:48.357 [Pipeline] }
00:29:48.373 [Pipeline] // stage
00:29:48.379 [Pipeline] }
00:29:48.393 [Pipeline] // node
00:29:48.399 [Pipeline] End of Pipeline
00:29:48.445 Finished: SUCCESS
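The clear_lvols step traced earlier (ftl/common.sh@28-30) is a small RPC loop: list every lvol store on the running target, extract the uuids with jq, and delete each store by uuid before the target is torn down. A condensed sketch of that loop; the rpc shorthand variable is an illustration, while both RPC method names and the repository path come from the trace above:

    # List all lvol stores over the RPC socket and delete them by uuid.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done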
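The coverage post-processing at the end of autotest.sh reduces to a capture-merge-filter pipeline: lcov -c captures counters from the instrumented build tree, -a merges the pre-test baseline with the test capture, and repeated -r passes strip paths that are not SPDK source (DPDK, system headers, sample apps). A condensed sketch of those steps, with the out variable as an illustrative shorthand for the output directory seen in the log; the job additionally passes the --rc branch/function-coverage switches and a -t test name, omitted here for brevity:

    out=/home/vagrant/spdk_repo/spdk/../output
    # Capture test-time counters from the build tree.
    lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o "$out/cov_test.info"
    # Merge the pre-test baseline with the test capture.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Strip everything that is not SPDK source.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done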