00:00:00.000 Started by upstream project "autotest-per-patch" build number 132483
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.034 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.035 The recommended git tool is: git
00:00:00.035 using credential 00000000-0000-0000-0000-000000000002
00:00:00.037 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.062 Fetching changes from the remote Git repository
00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.138 > git --version # 'git version 2.39.2'
00:00:00.138 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.182 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.182 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:19.834 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:19.847 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:19.860 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:19.860 > git config core.sparsecheckout # timeout=10
00:00:19.871 > git read-tree -mu HEAD # timeout=10
00:00:19.887 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:19.914 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:19.914 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:20.002 [Pipeline] Start of Pipeline
00:00:20.018 [Pipeline] library
00:00:20.020 Loading library shm_lib@master
00:00:20.020 Library shm_lib@master is cached. Copying from home.
00:00:20.038 [Pipeline] node
00:00:20.048 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:20.050 [Pipeline] {
00:00:20.061 [Pipeline] catchError
00:00:20.063 [Pipeline] {
00:00:20.076 [Pipeline] wrap
00:00:20.086 [Pipeline] {
00:00:20.094 [Pipeline] stage
00:00:20.096 [Pipeline] { (Prologue)
00:00:20.116 [Pipeline] echo
00:00:20.117 Node: VM-host-WFP1
00:00:20.124 [Pipeline] cleanWs
00:00:20.135 [WS-CLEANUP] Deleting project workspace...
00:00:20.135 [WS-CLEANUP] Deferred wipeout is used...
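The block above is the Jenkins git plugin pinning the jbp helper repo to a single commit via a shallow fetch. A hand-run equivalent, assuming the Gerrit URL is reachable and credentials and proxy are already configured as the log shows, would look roughly like:

    # Shallow-fetch one branch, then check out the exact pinned revision
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507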
00:00:20.143 [WS-CLEANUP] done
00:00:20.352 [Pipeline] setCustomBuildProperty
00:00:20.446 [Pipeline] httpRequest
00:00:20.749 [Pipeline] echo
00:00:20.751 Sorcerer 10.211.164.20 is alive
00:00:20.763 [Pipeline] retry
00:00:20.765 [Pipeline] {
00:00:20.781 [Pipeline] httpRequest
00:00:20.787 HttpMethod: GET
00:00:20.787 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.788 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.789 Response Code: HTTP/1.1 200 OK
00:00:20.790 Success: Status code 200 is in the accepted range: 200,404
00:00:20.790 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.953 [Pipeline] }
00:00:20.971 [Pipeline] // retry
00:00:20.979 [Pipeline] sh
00:00:21.265 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:21.281 [Pipeline] httpRequest
00:00:21.584 [Pipeline] echo
00:00:21.586 Sorcerer 10.211.164.20 is alive
00:00:21.596 [Pipeline] retry
00:00:21.598 [Pipeline] {
00:00:21.609 [Pipeline] httpRequest
00:00:21.613 HttpMethod: GET
00:00:21.613 URL: http://10.211.164.20/packages/spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:00:21.614 Sending request to url: http://10.211.164.20/packages/spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:00:21.615 Response Code: HTTP/1.1 404 Not Found
00:00:21.615 Success: Status code 404 is in the accepted range: 200,404
00:00:21.616 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:00:21.618 [Pipeline] }
00:00:21.633 [Pipeline] // retry
00:00:21.639 [Pipeline] sh
00:00:21.919 + rm -f spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:00:21.932 [Pipeline] retry
00:00:21.934 [Pipeline] {
00:00:21.956 [Pipeline] checkout
00:00:21.965 The recommended git tool is: NONE
00:00:21.977 using credential 00000000-0000-0000-0000-000000000002
00:00:21.979 Wiping out workspace first.
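The two GET requests above implement a cache-or-build protocol against the Sorcerer package cache: 404 is deliberately inside the accepted status range, so a miss (as for the spdk tarball here) is not a failure but a signal to clone and build the tarball locally, then upload it via the PUT seen later. A sketch of the same handshake with plain curl (curl is an assumption; the pipeline uses the Jenkins httpRequest step):

    pkg=spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
    code=$(curl -s -o "$pkg" -w '%{http_code}' "http://10.211.164.20/packages/$pkg")
    if [ "$code" = 200 ]; then
        tar --no-same-owner -xf "$pkg"   # cache hit: unpack and skip the clone
    else
        rm -f "$pkg"                     # cache miss: the 404 body is not a tarball
    fi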
00:00:21.987 Cloning the remote Git repository
00:00:21.990 Honoring refspec on initial clone
00:00:21.994 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:21.994 > git init /var/jenkins/workspace/nvme-vg-autotest/spdk # timeout=10
00:00:22.004 Using reference repository: /var/ci_repos/spdk_multi
00:00:22.004 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:22.004 > git --version # timeout=10
00:00:22.009 > git --version # 'git version 2.25.1'
00:00:22.009 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:22.014 Setting http proxy: proxy-dmz.intel.com:911
00:00:22.014 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/71/25471/1 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:03:00.769 Avoid second fetch
00:03:00.790 Checking out Revision eb055bb93252b0fc9e854d82315bd3a3991825f9 (FETCH_HEAD)
00:03:01.066 Commit message: "blob: don't use bs_load_ctx_fail in bs_write_used_* functions"
00:03:00.742 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:03:00.749 > git config --add remote.origin.fetch refs/changes/71/25471/1 # timeout=10
00:03:00.756 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:03:00.770 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:03:00.782 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:03:00.792 > git config core.sparsecheckout # timeout=10
00:03:00.797 > git checkout -f eb055bb93252b0fc9e854d82315bd3a3991825f9 # timeout=10
00:03:01.068 > git rev-list --no-walk 08e207613fa05b54f22cb1d4f4747248b4f0633b # timeout=10
00:03:01.096 > git remote # timeout=10
00:03:01.101 > git submodule init # timeout=10
00:03:01.186 > git submodule sync # timeout=10
00:03:01.266 > git config --get remote.origin.url # timeout=10
00:03:01.275 > git submodule init # timeout=10
00:03:01.347 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:03:01.355 > git config --get submodule.dpdk.url # timeout=10
00:03:01.360 > git remote # timeout=10
00:03:01.368 > git config --get remote.origin.url # timeout=10
00:03:01.373 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:03:01.378 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:03:01.383 > git remote # timeout=10
00:03:01.389 > git config --get remote.origin.url # timeout=10
00:03:01.395 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:03:01.401 > git config --get submodule.isa-l.url # timeout=10
00:03:01.406 > git remote # timeout=10
00:03:01.412 > git config --get remote.origin.url # timeout=10
00:03:01.417 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:03:01.422 > git config --get submodule.ocf.url # timeout=10
00:03:01.427 > git remote # timeout=10
00:03:01.435 > git config --get remote.origin.url # timeout=10
00:03:01.440 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:03:01.445 > git config --get submodule.libvfio-user.url # timeout=10
00:03:01.451 > git remote # timeout=10
00:03:01.459 > git config --get remote.origin.url # timeout=10
00:03:01.464 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:03:01.469 > git config --get submodule.xnvme.url # timeout=10
00:03:01.474 > git remote # timeout=10
00:03:01.480 > git config --get remote.origin.url # timeout=10
00:03:01.485 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:03:01.490 > git config --get submodule.isa-l-crypto.url # timeout=10
00:03:01.495 > git remote # timeout=10
00:03:01.502 > git config --get remote.origin.url # timeout=10
00:03:01.507 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:03:01.520 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.520 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.520 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:03:01.520 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:03:01.520 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.521 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:03:01.522 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.522 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.522 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.522 Setting http proxy: proxy-dmz.intel.com:911
00:03:01.522 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:03:01.522 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:03:01.522 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:03:01.522 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:03:29.389 [Pipeline] dir
00:03:29.390 Running in /var/jenkins/workspace/nvme-vg-autotest/spdk
00:03:29.392 [Pipeline] {
00:03:29.409 [Pipeline] sh
00:03:29.696 ++ nproc
00:03:29.696 + threads=112
00:03:29.696 + git repack -a -d --threads=112
00:03:36.263 + git submodule foreach git repack -a -d --threads=112
00:03:36.263 Entering 'dpdk'
00:03:39.553 Entering 'intel-ipsec-mb'
00:03:39.813 Entering 'isa-l'
00:03:40.073 Entering 'isa-l-crypto'
00:03:40.073 Entering 'libvfio-user'
00:03:40.331 Entering 'ocf'
00:03:40.899 Entering 'xnvme'
00:03:40.899 + find .git -type f -name alternates -print -delete
00:03:40.899 .git/objects/info/alternates
00:03:40.899 .git/modules/isa-l-crypto/objects/info/alternates
00:03:40.900 .git/modules/ocf/objects/info/alternates
00:03:40.900 .git/modules/libvfio-user/objects/info/alternates
00:03:40.900 .git/modules/xnvme/objects/info/alternates
00:03:40.900 .git/modules/intel-ipsec-mb/objects/info/alternates
00:03:40.900 .git/modules/dpdk/objects/info/alternates
00:03:40.900 .git/modules/isa-l/objects/info/alternates
00:03:40.909 [Pipeline] }
00:03:40.927 [Pipeline] // dir
00:03:40.933 [Pipeline] }
00:03:40.949 [Pipeline] // retry
00:03:40.958 [Pipeline] sh
00:03:41.241 + hash pigz
00:03:41.241 + tar -czf spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz spdk
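The repack-then-delete-alternates sequence above is what makes the cached tarball self-contained. The clone borrowed objects from /var/ci_repos/spdk_multi through .git/objects/info/alternates; git repack -a -d copies every reachable object into local packs (in the superproject and each submodule), after which deleting the alternates files safely severs the link to the shared object store. Condensed, the same idea is:

    git clone --reference /var/ci_repos/spdk_multi https://review.spdk.io/gerrit/a/spdk/spdk
    cd spdk
    git repack -a -d                            # materialize all borrowed objects locally
    git submodule foreach git repack -a -d      # same for every submodule
    find .git -type f -name alternates -delete  # nothing points at the shared repo anymore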
00:03:53.479 [Pipeline] retry
00:03:53.480 [Pipeline] {
00:03:53.494 [Pipeline] httpRequest
00:03:53.500 HttpMethod: PUT
00:03:53.500 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:03:53.501 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:04:02.045 Response Code: HTTP/1.1 200 OK
00:04:02.057 Success: Status code 200 is in the accepted range: 200
00:04:02.059 [Pipeline] }
00:04:02.077 [Pipeline] // retry
00:04:02.084 [Pipeline] echo
00:04:02.085 
00:04:02.085 Locking
00:04:02.085 Waited 6s for lock
00:04:02.085 File already exists: /storage/packages/spdk_eb055bb93252b0fc9e854d82315bd3a3991825f9.tar.gz
00:04:02.085 
00:04:02.089 [Pipeline] sh
00:04:02.377 + git -C spdk log --oneline -n5
00:04:02.377 eb055bb93 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:04:02.377 71b34571d blob: fix possible memory leak in bs loading
00:04:02.377 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:04:02.377 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE
00:04:02.377 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create()
00:04:02.397 [Pipeline] writeFile
00:04:02.413 [Pipeline] sh
00:04:02.703 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:02.717 [Pipeline] sh
00:04:03.004 + cat autorun-spdk.conf
00:04:03.004 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:03.004 SPDK_TEST_NVME=1
00:04:03.004 SPDK_TEST_FTL=1
00:04:03.004 SPDK_TEST_ISAL=1
00:04:03.004 SPDK_RUN_ASAN=1
00:04:03.004 SPDK_RUN_UBSAN=1
00:04:03.004 SPDK_TEST_XNVME=1
00:04:03.004 SPDK_TEST_NVME_FDP=1
00:04:03.004 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:03.015 RUN_NIGHTLY=0
00:04:03.017 [Pipeline] }
00:04:03.031 [Pipeline] // stage
00:04:03.048 [Pipeline] stage
00:04:03.051 [Pipeline] { (Run VM)
00:04:03.064 [Pipeline] sh
00:04:03.356 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:03.356 + echo 'Start stage prepare_nvme.sh'
00:04:03.356 Start stage prepare_nvme.sh
00:04:03.356 + [[ -n 5 ]]
00:04:03.356 + disk_prefix=ex5
00:04:03.356 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:04:03.356 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:04:03.356 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:04:03.356 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:03.356 ++ SPDK_TEST_NVME=1
00:04:03.356 ++ SPDK_TEST_FTL=1
00:04:03.356 ++ SPDK_TEST_ISAL=1
00:04:03.356 ++ SPDK_RUN_ASAN=1
00:04:03.356 ++ SPDK_RUN_UBSAN=1
00:04:03.356 ++ SPDK_TEST_XNVME=1
00:04:03.356 ++ SPDK_TEST_NVME_FDP=1
00:04:03.356 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:03.356 ++ RUN_NIGHTLY=0
00:04:03.356 + cd /var/jenkins/workspace/nvme-vg-autotest
00:04:03.356 + nvme_files=()
00:04:03.356 + declare -A nvme_files
00:04:03.356 + backend_dir=/var/lib/libvirt/images/backends
00:04:03.356 + nvme_files['nvme.img']=5G
00:04:03.356 + nvme_files['nvme-cmb.img']=5G
00:04:03.356 + nvme_files['nvme-multi0.img']=4G
00:04:03.356 + nvme_files['nvme-multi1.img']=4G
00:04:03.356 + nvme_files['nvme-multi2.img']=4G
00:04:03.356 + nvme_files['nvme-openstack.img']=8G
00:04:03.356 + nvme_files['nvme-zns.img']=5G
00:04:03.356 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:03.356 + (( SPDK_TEST_FTL == 1 ))
00:04:03.356 + nvme_files["nvme-ftl.img"]=6G
00:04:03.356 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:03.356 + nvme_files["nvme-fdp.img"]=1G
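prepare_nvme.sh builds an associative array mapping backing-file names to sizes, appending the FTL and FDP images only when the corresponding SPDK_TEST_* flags are set, then loops over the keys, as the next block shows. The shape of that pattern, with truncate standing in for the create_nvme_img.sh call whose internals this log does not show:

    declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G)
    (( SPDK_TEST_FTL == 1 ))      && nvme_files["nvme-ftl.img"]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files["nvme-fdp.img"]=1G
    for nvme in "${!nvme_files[@]}"; do
        # the job calls spdk/scripts/vagrant/create_nvme_img.sh here; truncate is a stand-in
        truncate -s "${nvme_files[$nvme]}" "/var/lib/libvirt/images/backends/ex5-$nvme"
    done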
00:04:03.356 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:04:03.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:04:03.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:04:03.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:04:03.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:04:03.356 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:03.356 + for nvme in "${!nvme_files[@]}"
00:04:03.356 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:04:03.623 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:03.623 + for nvme in "${!nvme_files[@]}"
00:04:03.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:04:03.623 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:03.623 + for nvme in "${!nvme_files[@]}"
00:04:03.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:04:03.623 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:04:03.623 + for nvme in "${!nvme_files[@]}"
00:04:03.623 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:04:03.623 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:03.623 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:04:03.623 + echo 'End stage prepare_nvme.sh'
00:04:03.623 End stage prepare_nvme.sh
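Each backing file is raw with preallocation=falloc, i.e. blocks are reserved via fallocate but never written, so even the 8G image is created near-instantly; the size=... values are the exact byte equivalents of the requested 1G/4G/5G/6G/8G. The "Formatting ..." lines match qemu-img output, so one loop iteration is presumably equivalent to:

    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex5-nvme-fdp.img 1G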
00:04:03.651 [Pipeline] sh
00:04:03.941 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:03.941 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:04:03.941 
00:04:03.941 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:04:03.941 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:04:03.941 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:04:03.941 HELP=0
00:04:03.941 DRY_RUN=0
00:04:03.941 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:04:03.941 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:04:03.941 NVME_AUTO_CREATE=0
00:04:03.941 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:04:03.941 NVME_CMB=,,,,
00:04:03.941 NVME_PMR=,,,,
00:04:03.941 NVME_ZNS=,,,,
00:04:03.941 NVME_MS=true,,,,
00:04:03.941 NVME_FDP=,,,on,
00:04:03.941 SPDK_VAGRANT_DISTRO=fedora39
00:04:03.941 SPDK_VAGRANT_VMCPU=10
00:04:03.941 SPDK_VAGRANT_VMRAM=12288
00:04:03.941 SPDK_VAGRANT_PROVIDER=libvirt
00:04:03.941 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:03.941 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:03.941 SPDK_OPENSTACK_NETWORK=0
00:04:03.941 VAGRANT_PACKAGE_BOX=0
00:04:03.942 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:04:03.942 FORCE_DISTRO=true
00:04:03.942 VAGRANT_BOX_VERSION=
00:04:03.942 EXTRA_VAGRANTFILES=
00:04:03.942 NIC_MODEL=e1000
00:04:03.942 
00:04:03.942 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:04:03.942 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:04:06.478 Bringing machine 'default' up with 'libvirt' provider...
00:04:07.854 ==> default: Creating image (snapshot of base box volume).
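The configuration dump above shows how vagrant_create_vm.sh exploded the -b disk arguments into parallel per-controller arrays (NVME_FILE, NVME_DISKS_TYPE, NVME_DISKS_NAMESPACES, NVME_CMB/PMR/ZNS/MS/FDP, each with one comma-separated slot per disk). Judging by how those arrays line up with the Setup line, the -b field order appears to be path, type, extra namespaces, cmb, pmr, zns, ms, fdp; this is an inference from the dump, not documented grammar:

    # -b <image>[,<type>[,<extra-ns images, ':'-separated>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]
    # -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true   -> 7th field: NVME_MS=true
    # -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on    -> 8th field: NVME_FDP=on

The resulting libvirt domain settings and raw QEMU command-line arguments follow below.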
00:04:08.111 ==> default: Creating domain with the following settings...
00:04:08.111 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732529354_15b0e6b4c8c3412bd99a
00:04:08.111 ==> default: -- Domain type: kvm
00:04:08.111 ==> default: -- Cpus: 10
00:04:08.111 ==> default: -- Feature: acpi
00:04:08.111 ==> default: -- Feature: apic
00:04:08.111 ==> default: -- Feature: pae
00:04:08.111 ==> default: -- Memory: 12288M
00:04:08.111 ==> default: -- Memory Backing: hugepages:
00:04:08.111 ==> default: -- Management MAC:
00:04:08.111 ==> default: -- Loader:
00:04:08.111 ==> default: -- Nvram:
00:04:08.111 ==> default: -- Base box: spdk/fedora39
00:04:08.111 ==> default: -- Storage pool: default
00:04:08.111 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732529354_15b0e6b4c8c3412bd99a.img (20G)
00:04:08.111 ==> default: -- Volume Cache: default
00:04:08.111 ==> default: -- Kernel:
00:04:08.111 ==> default: -- Initrd:
00:04:08.111 ==> default: -- Graphics Type: vnc
00:04:08.111 ==> default: -- Graphics Port: -1
00:04:08.111 ==> default: -- Graphics IP: 127.0.0.1
00:04:08.111 ==> default: -- Graphics Password: Not defined
00:04:08.111 ==> default: -- Video Type: cirrus
00:04:08.111 ==> default: -- Video VRAM: 9216
00:04:08.111 ==> default: -- Sound Type:
00:04:08.111 ==> default: -- Keymap: en-us
00:04:08.111 ==> default: -- TPM Path:
00:04:08.111 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:08.111 ==> default: -- Command line args:
00:04:08.111 ==> default: -> value=-device,
00:04:08.111 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:08.111 ==> default: -> value=-drive,
00:04:08.111 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:04:08.111 ==> default: -> value=-device,
00:04:08.111 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:04:08.111 ==> default: -> value=-device,
00:04:08.111 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:08.112 ==> default: -> value=-drive,
00:04:08.112 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:04:08.112 ==> default: -> value=-drive,
00:04:08.112 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:08.112 ==> default: -> value=-drive,
00:04:08.112 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:08.112 ==> default: -> value=-drive,
00:04:08.112 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:04:08.112 ==> default: -> value=-drive,
00:04:08.112 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:04:08.112 ==> default: -> value=-device,
00:04:08.112 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
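The four controllers map one-to-one to the backends prepared earlier: nvme-0 (serial 12340) exposes the FTL image with 64 bytes of per-block metadata (ms=64), nvme-1 is a plain single-namespace drive, nvme-2 carries three namespaces (multi0/1/2), and nvme-3 sits in an NVM subsystem with Flexible Data Placement enabled. In QEMU's FDP emulation, fdp.runs is the reclaim-unit nominal size, fdp.nrg the number of reclaim groups, and fdp.nruh the number of reclaim-unit handles. Reduced to just the FDP controller (all other VM options omitted), the invocation is:

    qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096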
00:04:08.372 ==> default: Creating shared folders metadata...
00:04:08.372 ==> default: Starting domain.
00:04:10.336 ==> default: Waiting for domain to get an IP address...
00:04:25.250 ==> default: Waiting for SSH to become available...
00:04:26.632 ==> default: Configuring and enabling network interfaces...
00:04:33.210 default: SSH address: 192.168.121.109:22
00:04:33.210 default: SSH username: vagrant
00:04:33.210 default: SSH auth method: private key
00:04:35.751 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:43.871 ==> default: Mounting SSHFS shared folder...
00:04:46.410 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:46.410 ==> default: Checking Mount..
00:04:48.317 ==> default: Folder Successfully Mounted!
00:04:48.317 ==> default: Running provisioner: file...
00:04:49.255 default: ~/.gitconfig => .gitconfig
00:04:49.827 
00:04:49.827 SUCCESS!
00:04:49.827 
00:04:49.827 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:49.827 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:49.827 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:49.827 
00:04:49.933 [Pipeline] }
00:04:49.952 [Pipeline] // stage
00:04:49.961 [Pipeline] dir
00:04:49.962 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:04:49.964 [Pipeline] {
00:04:49.979 [Pipeline] catchError
00:04:49.982 [Pipeline] {
00:04:49.996 [Pipeline] sh
00:04:50.278 + vagrant ssh-config --host vagrant
00:04:50.278 + sed -ne /^Host/,$p
00:04:50.278 + tee ssh_conf
00:04:53.572 Host vagrant
00:04:53.572 HostName 192.168.121.109
00:04:53.572 User vagrant
00:04:53.572 Port 22
00:04:53.572 UserKnownHostsFile /dev/null
00:04:53.572 StrictHostKeyChecking no
00:04:53.572 PasswordAuthentication no
00:04:53.572 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:53.572 IdentitiesOnly yes
00:04:53.572 LogLevel FATAL
00:04:53.572 ForwardAgent yes
00:04:53.572 ForwardX11 yes
00:04:53.572 
00:04:53.587 [Pipeline] withEnv
00:04:53.590 [Pipeline] {
00:04:53.604 [Pipeline] sh
00:04:53.886 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:53.887 source /etc/os-release
00:04:53.887 [[ -e /image.version ]] && img=$(< /image.version)
00:04:53.887 # Minimal, systemd-like check.
00:04:53.887 if [[ -e /.dockerenv ]]; then
00:04:53.887 # Clear garbage from the node's name:
00:04:53.887 # agt-er_autotest_547-896 -> autotest_547-896
00:04:53.887 # $HOSTNAME is the actual container id
00:04:53.887 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:53.887 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:53.887 # We can assume this is a mount from a host where container is running,
00:04:53.887 # so fetch its hostname to easily identify the target swarm worker.
00:04:53.887 container="$(< /etc/hostname) ($agent)"
00:04:53.887 else
00:04:53.887 # Fallback
00:04:53.887 container=$agent
00:04:53.887 fi
00:04:53.887 fi
00:04:53.887 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:53.887 
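The inline script just run over SSH fingerprints the worker as a single line in the form NAME VERSION_ID|kernel|image-version|container. Given the os-release and uname output further down, this VM plausibly reported something like the following (the actual /image.version content isn't shown in this log, and the container branch stays N/A outside Docker):

    Fedora Linux 39|6.8.9-200.fc39.x86_64|<contents of /image.version, or N/A>|N/A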
00:04:54.155 [Pipeline] }
00:04:54.169 [Pipeline] // withEnv
00:04:54.177 [Pipeline] setCustomBuildProperty
00:04:54.190 [Pipeline] stage
00:04:54.192 [Pipeline] { (Tests)
00:04:54.208 [Pipeline] sh
00:04:54.488 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:54.761 [Pipeline] sh
00:04:55.043 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:55.316 [Pipeline] timeout
00:04:55.317 Timeout set to expire in 50 min
00:04:55.318 [Pipeline] {
00:04:55.333 [Pipeline] sh
00:04:55.619 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:56.186 HEAD is now at eb055bb93 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:04:56.198 [Pipeline] sh
00:04:56.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:56.749 [Pipeline] sh
00:04:57.088 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:57.365 [Pipeline] sh
00:04:57.770 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:58.031 ++ readlink -f spdk_repo
00:04:58.031 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:58.031 + [[ -n /home/vagrant/spdk_repo ]]
00:04:58.031 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:58.031 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:58.031 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:58.031 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:58.031 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:58.031 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:58.031 + cd /home/vagrant/spdk_repo
00:04:58.031 + source /etc/os-release
00:04:58.031 ++ NAME='Fedora Linux'
00:04:58.031 ++ VERSION='39 (Cloud Edition)'
00:04:58.031 ++ ID=fedora
00:04:58.031 ++ VERSION_ID=39
00:04:58.031 ++ VERSION_CODENAME=
00:04:58.031 ++ PLATFORM_ID=platform:f39
00:04:58.031 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:58.031 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:58.031 ++ LOGO=fedora-logo-icon
00:04:58.031 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:58.031 ++ HOME_URL=https://fedoraproject.org/
00:04:58.031 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:58.031 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:58.031 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:58.031 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:58.031 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:58.031 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:58.031 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:58.031 ++ SUPPORT_END=2024-11-12
00:04:58.031 ++ VARIANT='Cloud Edition'
00:04:58.031 ++ VARIANT_ID=cloud
00:04:58.031 + uname -a
00:04:58.031 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:58.031 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:58.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:58.860 Hugepages
00:04:58.860 node hugesize free / total
00:04:58.860 node0 1048576kB 0 / 0
00:04:58.860 node0 2048kB 0 / 0
00:04:58.860 
00:04:58.860 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:58.860 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:58.860 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:58.860 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:58.860 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:58.860 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:58.860 + rm -f /tmp/spdk-ld-path
00:04:58.860 + source autorun-spdk.conf
00:04:58.860 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:58.860 ++ SPDK_TEST_NVME=1
00:04:58.860 ++ SPDK_TEST_FTL=1
00:04:58.860 ++ SPDK_TEST_ISAL=1
00:04:58.860 ++ SPDK_RUN_ASAN=1
00:04:58.860 ++ SPDK_RUN_UBSAN=1
00:04:58.860 ++ SPDK_TEST_XNVME=1
00:04:58.860 ++ SPDK_TEST_NVME_FDP=1
00:04:58.860 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:58.860 ++ RUN_NIGHTLY=0
00:04:58.860 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:58.860 + [[ -n '' ]]
00:04:58.860 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:59.120 + for M in /var/spdk/build-*-manifest.txt
00:04:59.120 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:59.120 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:59.120 + for M in /var/spdk/build-*-manifest.txt
00:04:59.120 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:59.120 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:59.120 + for M in /var/spdk/build-*-manifest.txt
00:04:59.120 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:59.120 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:59.120 ++ uname
00:04:59.120 + [[ Linux == \L\i\n\u\x ]]
00:04:59.120 + sudo dmesg -T
00:04:59.120 + sudo dmesg --clear
00:04:59.120 + dmesg_pid=5244
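In the setup.sh status table above, the four 1b36 0010 controllers at 0000:00:10.0 through 0000:00:13.0 are exactly the QEMU NVMe devices defined earlier at addr=0x10..0x13, with nvme2 exposing the three namespaces of the multi-image disk; the virtio boot disk is skipped ("so not binding PCI dev") because it backs live mounts. A quick manual cross-check inside the VM would be:

    lspci -nn | grep -i 'non-volatile'   # expect four 1b36:0010 QEMU NVMe controllers
    ls /dev/nvme2n*                      # expect nvme2n1 nvme2n2 nvme2n3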
+ [[ Fedora Linux == FreeBSD ]] 00:04:59.120 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:59.120 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:59.120 + sudo dmesg -Tw 00:04:59.120 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:59.120 + [[ -x /usr/src/fio-static/fio ]] 00:04:59.120 + export FIO_BIN=/usr/src/fio-static/fio 00:04:59.120 + FIO_BIN=/usr/src/fio-static/fio 00:04:59.120 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:59.120 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:59.120 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:59.120 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:59.120 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:59.120 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:59.120 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:59.120 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:59.120 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:59.121 10:10:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:59.121 10:10:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:59.121 10:10:06 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:04:59.121 10:10:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:59.121 10:10:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:59.381 10:10:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:59.381 10:10:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.381 10:10:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:59.381 10:10:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:59.381 10:10:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.381 10:10:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.381 10:10:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.381 10:10:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.381 10:10:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.381 10:10:06 -- paths/export.sh@5 -- $ export PATH 00:04:59.381 10:10:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.381 10:10:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:59.381 10:10:06 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:59.381 10:10:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732529406.XXXXXX 00:04:59.381 10:10:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732529406.zGfQ4L 00:04:59.381 10:10:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:59.381 10:10:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:59.381 10:10:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:59.381 10:10:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:59.381 10:10:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:59.381 10:10:06 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:59.381 10:10:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:59.381 10:10:06 -- common/autotest_common.sh@10 -- $ set +x 00:04:59.381 10:10:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:59.381 10:10:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:59.381 10:10:06 -- pm/common@17 -- $ local monitor 00:04:59.381 10:10:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.381 10:10:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.381 10:10:06 -- pm/common@25 -- $ sleep 1 00:04:59.381 10:10:06 -- pm/common@21 -- $ date +%s 00:04:59.381 10:10:06 -- pm/common@21 -- $ date +%s 00:04:59.381 10:10:06 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732529406 00:04:59.381 10:10:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732529406 00:04:59.381 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732529406_collect-cpu-load.pm.log 00:04:59.381 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732529406_collect-vmstat.pm.log 00:05:00.320 10:10:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:00.320 10:10:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:00.320 10:10:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:00.320 10:10:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:00.320 10:10:07 -- spdk/autobuild.sh@16 -- $ date -u 00:05:00.320 Mon Nov 25 10:10:07 AM UTC 2024 00:05:00.320 10:10:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:00.320 v25.01-pre-238-geb055bb93 00:05:00.320 10:10:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:00.320 10:10:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:00.320 10:10:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:00.320 10:10:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:00.320 10:10:07 -- common/autotest_common.sh@10 -- $ set +x 00:05:00.320 ************************************ 00:05:00.320 START TEST asan 00:05:00.320 ************************************ 00:05:00.320 using asan 00:05:00.320 10:10:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:05:00.320 00:05:00.320 real 0m0.000s 00:05:00.320 user 0m0.000s 00:05:00.320 sys 0m0.000s 00:05:00.320 10:10:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:00.320 10:10:07 asan -- common/autotest_common.sh@10 -- $ set +x 00:05:00.320 ************************************ 00:05:00.320 END TEST asan 00:05:00.320 ************************************ 00:05:00.580 10:10:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:00.580 10:10:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:00.580 10:10:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:00.580 10:10:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:00.580 10:10:07 -- common/autotest_common.sh@10 -- $ set +x 00:05:00.580 ************************************ 00:05:00.580 START TEST ubsan 00:05:00.580 ************************************ 00:05:00.580 using ubsan 00:05:00.580 10:10:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:00.580 00:05:00.580 real 0m0.000s 00:05:00.580 user 0m0.000s 00:05:00.580 sys 0m0.000s 00:05:00.580 10:10:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:00.580 10:10:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:00.580 ************************************ 00:05:00.580 END TEST ubsan 00:05:00.580 ************************************ 00:05:00.580 10:10:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:00.580 10:10:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:00.580 10:10:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:00.580 10:10:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:00.580 10:10:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:00.580 10:10:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:00.580 10:10:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
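Before configuring, autobuild starts background resource monitors (collect-cpu-load and collect-vmstat above) that log into the output/power directory and are torn down by the trap stop_monitor_resources EXIT seen in the trace. The shape of that pattern, with illustrative names rather than the real pm/common internals:

    out=/home/vagrant/spdk_repo/output/power   # the -d directory passed above
    mon_pids=()
    vmstat -n 1 > "$out/vmstat.log" & mon_pids+=($!)   # sample until the build ends
    stop_monitor_resources() { kill "${mon_pids[@]}" 2>/dev/null; }
    trap stop_monitor_resources EXIT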
00:05:00.580 10:10:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:00.580 10:10:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:05:00.580 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:00.580 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:01.149 Using 'verbs' RDMA provider
00:05:17.426 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:35.531 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:35.531 Creating mk/config.mk...done.
00:05:35.531 Creating mk/cc.flags.mk...done.
00:05:35.531 Type 'make' to build.
00:05:35.531 10:10:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:35.531 10:10:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:35.531 10:10:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:35.531 10:10:40 -- common/autotest_common.sh@10 -- $ set +x
00:05:35.531 ************************************
00:05:35.531 START TEST make
00:05:35.531 ************************************
00:05:35.531 10:10:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:05:35.531 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:05:35.531 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:05:35.531 meson setup builddir \
00:05:35.531 -Dwith-libaio=enabled \
00:05:35.531 -Dwith-liburing=enabled \
00:05:35.531 -Dwith-libvfn=disabled \
00:05:35.531 -Dwith-spdk=disabled \
00:05:35.531 -Dexamples=false \
00:05:35.531 -Dtests=false \
00:05:35.531 -Dtools=false && \
00:05:35.531 meson compile -C builddir && \
00:05:35.531 cd -)
00:05:35.531 make[1]: Nothing to be done for 'all'.
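The configure switches trace back to autorun-spdk.conf: SPDK_RUN_ASAN=1 became --enable-asan, SPDK_RUN_UBSAN=1 became --enable-ubsan, and SPDK_TEST_XNVME=1 became --with-xnvme, which is why make immediately bootstraps the xnvme submodule with the meson setup shown above. A sketch of that mapping, inferred from this log rather than the full get_config_params logic:

    config_params='--enable-debug --enable-werror'
    [[ $SPDK_RUN_ASAN == 1 ]]   && config_params+=' --enable-asan'
    [[ $SPDK_RUN_UBSAN == 1 ]]  && config_params+=' --enable-ubsan'
    [[ $SPDK_TEST_XNVME == 1 ]] && config_params+=' --with-xnvme'
    ./configure $config_params --with-shared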
00:05:36.099 The Meson build system 00:05:36.099 Version: 1.5.0 00:05:36.099 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:05:36.099 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:36.099 Build type: native build 00:05:36.099 Project name: xnvme 00:05:36.099 Project version: 0.7.5 00:05:36.099 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:36.099 C linker for the host machine: cc ld.bfd 2.40-14 00:05:36.099 Host machine cpu family: x86_64 00:05:36.099 Host machine cpu: x86_64 00:05:36.099 Message: host_machine.system: linux 00:05:36.099 Compiler for C supports arguments -Wno-missing-braces: YES 00:05:36.099 Compiler for C supports arguments -Wno-cast-function-type: YES 00:05:36.099 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:36.099 Run-time dependency threads found: YES 00:05:36.099 Has header "setupapi.h" : NO 00:05:36.099 Has header "linux/blkzoned.h" : YES 00:05:36.099 Has header "linux/blkzoned.h" : YES (cached) 00:05:36.099 Has header "libaio.h" : YES 00:05:36.099 Library aio found: YES 00:05:36.099 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:36.100 Run-time dependency liburing found: YES 2.2 00:05:36.100 Dependency libvfn skipped: feature with-libvfn disabled 00:05:36.100 Found CMake: /usr/bin/cmake (3.27.7) 00:05:36.100 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:05:36.100 Subproject spdk : skipped: feature with-spdk disabled 00:05:36.100 Run-time dependency appleframeworks found: NO (tried framework) 00:05:36.100 Run-time dependency appleframeworks found: NO (tried framework) 00:05:36.100 Library rt found: YES 00:05:36.100 Checking for function "clock_gettime" with dependency -lrt: YES 00:05:36.100 Configuring xnvme_config.h using configuration 00:05:36.100 Configuring xnvme.spec using configuration 00:05:36.100 Run-time dependency bash-completion found: YES 2.11 00:05:36.100 Message: Bash-completions: /usr/share/bash-completion/completions 00:05:36.100 Program cp found: YES (/usr/bin/cp) 00:05:36.100 Build targets in project: 3 00:05:36.100 00:05:36.100 xnvme 0.7.5 00:05:36.100 00:05:36.100 Subprojects 00:05:36.100 spdk : NO Feature 'with-spdk' disabled 00:05:36.100 00:05:36.100 User defined options 00:05:36.100 examples : false 00:05:36.100 tests : false 00:05:36.100 tools : false 00:05:36.100 with-libaio : enabled 00:05:36.100 with-liburing: enabled 00:05:36.100 with-libvfn : disabled 00:05:36.100 with-spdk : disabled 00:05:36.100 00:05:36.100 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:36.666 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:05:36.666 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:05:36.666 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:05:36.666 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:05:36.666 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:05:36.666 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:05:36.666 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:05:36.666 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:05:36.666 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:05:36.666 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:05:36.666 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 
00:05:36.666 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:05:36.666 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:05:36.666 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:05:36.666 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:05:36.666 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:05:36.666 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:05:36.924 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:05:36.925 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:05:36.925 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:05:36.925 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:05:36.925 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:05:36.925 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:05:36.925 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:05:36.925 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:05:36.925 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:05:36.925 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:05:36.925 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:05:36.925 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:05:36.925 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:05:36.925 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:05:36.925 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:05:36.925 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:05:36.925 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:05:36.925 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:05:36.925 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:05:36.925 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:05:36.925 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:05:36.925 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:05:36.925 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:05:36.925 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:05:36.925 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:05:36.925 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:05:36.925 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:05:36.925 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:05:36.925 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:05:36.925 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:05:36.925 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:05:36.925 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:05:36.925 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:05:36.925 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
00:05:37.183 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:05:37.183 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:05:37.183 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:05:37.183 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:05:37.183 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:05:37.183 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:05:37.183 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:05:37.183 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:05:37.183 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:05:37.183 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:05:37.183 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:05:37.183 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:05:37.183 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:05:37.183 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:05:37.183 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:05:37.183 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:05:37.183 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:05:37.183 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:05:37.183 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:05:37.462 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:05:37.462 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:05:37.462 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:05:37.462 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:05:37.721 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:05:37.721 [75/76] Linking static target lib/libxnvme.a
00:05:37.721 [76/76] Linking target lib/libxnvme.so.0.7.5
00:05:37.721 INFO: autodetecting backend as ninja
00:05:37.721 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:37.979 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:05:46.121 The Meson build system
00:05:46.121 Version: 1.5.0
00:05:46.121 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:46.121 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:46.121 Build type: native build
00:05:46.121 Program cat found: YES (/usr/bin/cat)
00:05:46.121 Project name: DPDK
00:05:46.121 Project version: 24.03.0
00:05:46.121 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:46.121 C linker for the host machine: cc ld.bfd 2.40-14
00:05:46.121 Host machine cpu family: x86_64
00:05:46.121 Host machine cpu: x86_64
00:05:46.121 Message: ## Building in Developer Mode ##
00:05:46.121 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:46.121 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:46.121 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:46.121 Program python3 found: YES (/usr/bin/python3)
00:05:46.121 Program cat found: YES (/usr/bin/cat)
00:05:46.121 Compiler for C supports arguments -march=native: YES
00:05:46.121 Checking for size of "void *" : 8
00:05:46.121 Checking for size of "void *" : 8 (cached)
00:05:46.121 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:46.121 Library m found: YES
00:05:46.121 Library numa found: YES
00:05:46.121 Has header "numaif.h" : YES
00:05:46.121 Library fdt found: NO
00:05:46.121 Library execinfo found: NO
00:05:46.121 Has header "execinfo.h" : YES
00:05:46.121 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:46.121 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:46.121 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:46.121 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:46.121 Run-time dependency openssl found: YES 3.1.1
00:05:46.121 Run-time dependency libpcap found: YES 1.10.4
00:05:46.121 Has header "pcap.h" with dependency libpcap: YES
00:05:46.121 Compiler for C supports arguments -Wcast-qual: YES
00:05:46.121 Compiler for C supports arguments -Wdeprecated: YES
00:05:46.121 Compiler for C supports arguments -Wformat: YES
00:05:46.121 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:46.121 Compiler for C supports arguments -Wformat-security: NO
00:05:46.121 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:46.121 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:46.121 Compiler for C supports arguments -Wnested-externs: YES
00:05:46.121 Compiler for C supports arguments -Wold-style-definition: YES
00:05:46.121 Compiler for C supports arguments -Wpointer-arith: YES
00:05:46.121 Compiler for C supports arguments -Wsign-compare: YES
00:05:46.121 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:46.121 Compiler for C supports arguments -Wundef: YES
00:05:46.121 Compiler for C supports arguments -Wwrite-strings: YES
00:05:46.121 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:46.121 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:46.121 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:46.121 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:46.121 Program objdump found: YES (/usr/bin/objdump)
00:05:46.121 Compiler for C supports arguments -mavx512f: YES
00:05:46.121 Checking if "AVX512 checking" compiles: YES
00:05:46.121 Fetching value of define "__SSE4_2__" : 1
00:05:46.121 Fetching value of define "__AES__" : 1
00:05:46.121 Fetching value of define "__AVX__" : 1
00:05:46.121 Fetching value of define "__AVX2__" : 1
00:05:46.121 Fetching value of define "__AVX512BW__" : 1
00:05:46.121 Fetching value of define "__AVX512CD__" : 1
00:05:46.121 Fetching value of define "__AVX512DQ__" : 1
00:05:46.121 Fetching value of define "__AVX512F__" : 1
00:05:46.121 Fetching value of define "__AVX512VL__" : 1
00:05:46.121 Fetching value of define "__PCLMUL__" : 1
00:05:46.121 Fetching value of define "__RDRND__" : 1
00:05:46.121 Fetching value of define "__RDSEED__" : 1
00:05:46.121 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:46.121 Fetching value of define "__znver1__" : (undefined)
00:05:46.121 Fetching value of define "__znver2__" : (undefined)
00:05:46.121 Fetching value of define "__znver3__" : (undefined)
00:05:46.121 Fetching value of define "__znver4__" : (undefined)
00:05:46.121 Library asan found: YES
00:05:46.121 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:46.121 Message: lib/log: Defining dependency "log"
00:05:46.121 Message: lib/kvargs: Defining dependency "kvargs"
00:05:46.121 Message: lib/telemetry: Defining dependency "telemetry"
00:05:46.121 Library rt found: YES
00:05:46.121 Checking for function "getentropy" : NO
00:05:46.121 Message: lib/eal: Defining dependency "eal"
00:05:46.121 Message: lib/ring: Defining dependency "ring"
00:05:46.121 Message: lib/rcu: Defining dependency "rcu"
00:05:46.121 Message: lib/mempool: Defining dependency "mempool"
00:05:46.121 Message: lib/mbuf: Defining dependency "mbuf"
00:05:46.121 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:46.121 Fetching value of define "__AVX512F__" : 1 (cached)
00:05:46.121 Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:46.121 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:46.121 Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:46.121 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:46.121 Compiler for C supports arguments -mpclmul: YES
00:05:46.121 Compiler for C supports arguments -maes: YES
00:05:46.121 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:46.121 Compiler for C supports arguments -mavx512bw: YES
00:05:46.121 Compiler for C supports arguments -mavx512dq: YES
00:05:46.121 Compiler for C supports arguments -mavx512vl: YES
00:05:46.121 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:46.121 Compiler for C supports arguments -mavx2: YES
00:05:46.121 Compiler for C supports arguments -mavx: YES
00:05:46.121 Message: lib/net: Defining dependency "net"
00:05:46.121 Message: lib/meter: Defining dependency "meter"
00:05:46.121 Message: lib/ethdev: Defining dependency "ethdev"
00:05:46.121 Message: lib/pci: Defining dependency "pci"
00:05:46.121 Message: lib/cmdline: Defining dependency "cmdline"
00:05:46.121 Message: lib/hash: Defining dependency "hash"
00:05:46.121 Message: lib/timer: Defining dependency "timer"
00:05:46.121 Message: lib/compressdev: Defining dependency "compressdev"
00:05:46.121 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:46.121 Message: lib/dmadev: Defining dependency "dmadev"
00:05:46.121 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:46.121 Message: lib/power: Defining dependency "power"
00:05:46.121 Message: lib/reorder: Defining dependency "reorder"
00:05:46.121 Message: lib/security: Defining dependency "security"
00:05:46.121 Has header "linux/userfaultfd.h" : YES
00:05:46.121 Has header "linux/vduse.h" : YES
00:05:46.121 Message: lib/vhost: Defining dependency "vhost"
00:05:46.121 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:46.121 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:46.121 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:46.121 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:46.121 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:46.121 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:46.122 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:46.122 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:46.122 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:46.122 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:46.122 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:46.122 Configuring doxy-api-html.conf using configuration
00:05:46.122 Configuring doxy-api-man.conf using configuration
00:05:46.122 Program mandb found: YES (/usr/bin/mandb)
00:05:46.122 Program sphinx-build found: NO
00:05:46.122 Configuring rte_build_config.h using configuration
00:05:46.122 Message:
00:05:46.122 =================
00:05:46.122 Applications Enabled
00:05:46.122 =================
00:05:46.122
00:05:46.122 apps:
00:05:46.122
00:05:46.122
00:05:46.122 Message:
00:05:46.122 =================
00:05:46.122 Libraries Enabled
00:05:46.122 =================
00:05:46.122
00:05:46.122 libs:
00:05:46.122 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:46.122 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:46.122 cryptodev, dmadev, power, reorder, security, vhost,
00:05:46.122
00:05:46.122 Message:
00:05:46.122 ===============
00:05:46.122 Drivers Enabled
00:05:46.122 ===============
00:05:46.122
00:05:46.122 common:
00:05:46.122
00:05:46.122 bus:
00:05:46.122 pci, vdev,
00:05:46.122 mempool:
00:05:46.122 ring,
00:05:46.122 dma:
00:05:46.122
00:05:46.122 net:
00:05:46.122
00:05:46.122 crypto:
00:05:46.122
00:05:46.122 compress:
00:05:46.122
00:05:46.122 vdpa:
00:05:46.122
00:05:46.122
00:05:46.122 Message:
00:05:46.122 =================
00:05:46.122 Content Skipped
00:05:46.122 =================
00:05:46.122
00:05:46.122 apps:
00:05:46.122 dumpcap: explicitly disabled via build config
00:05:46.122 graph: explicitly disabled via build config
00:05:46.122 pdump: explicitly disabled via build config
00:05:46.122 proc-info: explicitly disabled via build config
00:05:46.122 test-acl: explicitly disabled via build config
00:05:46.122 test-bbdev: explicitly disabled via build config
00:05:46.122 test-cmdline: explicitly disabled via build config
00:05:46.122 test-compress-perf: explicitly disabled via build config
00:05:46.122 test-crypto-perf: explicitly disabled via build config
00:05:46.122 test-dma-perf: explicitly disabled via build config
00:05:46.122 test-eventdev: explicitly disabled via build config
00:05:46.122 test-fib: explicitly disabled via build config
00:05:46.122 test-flow-perf: explicitly disabled via build config
00:05:46.122 test-gpudev: explicitly disabled via build config
00:05:46.122 test-mldev: explicitly disabled via build config
00:05:46.122 test-pipeline: explicitly disabled via build config
00:05:46.122 test-pmd: explicitly disabled via build config
00:05:46.122 test-regex: explicitly disabled via build config
00:05:46.122 test-sad: explicitly disabled via build config
00:05:46.122 test-security-perf: explicitly disabled via build config
00:05:46.122
00:05:46.122 libs:
00:05:46.122 argparse: explicitly disabled via build config
00:05:46.122 metrics: explicitly disabled via build config
00:05:46.122 acl: explicitly disabled via build config
00:05:46.122 bbdev: explicitly disabled via build config
00:05:46.122 bitratestats: explicitly disabled via build config
00:05:46.122 bpf: explicitly disabled via build config
00:05:46.122 cfgfile: explicitly disabled via build config
00:05:46.122 distributor: explicitly disabled via build config
00:05:46.122 efd: explicitly disabled via build config
00:05:46.122 eventdev: explicitly disabled via build config
00:05:46.122 dispatcher: explicitly disabled via build config
00:05:46.122 gpudev: explicitly disabled via build config
00:05:46.122 gro: explicitly disabled via build config
00:05:46.122 gso: explicitly disabled via build config
00:05:46.122 ip_frag: explicitly disabled via build config
00:05:46.122 jobstats: explicitly disabled via build config
00:05:46.122 latencystats: explicitly disabled via build config
00:05:46.122 lpm: explicitly disabled via build config
00:05:46.122 member: explicitly disabled via build config
00:05:46.122 pcapng: explicitly disabled via build config
00:05:46.122 rawdev: explicitly disabled via build config
00:05:46.122 regexdev: explicitly disabled via build config
00:05:46.122 mldev: explicitly disabled via build config
00:05:46.122 rib: explicitly disabled via build config
00:05:46.122 sched: explicitly disabled via build config
00:05:46.122 stack: explicitly disabled via build config
00:05:46.122 ipsec: explicitly disabled via build config
00:05:46.122 pdcp: explicitly disabled via build config
00:05:46.122 fib: explicitly disabled via build config
00:05:46.122 port: explicitly disabled via build config
00:05:46.122 pdump: explicitly disabled via build config
00:05:46.122 table: explicitly disabled via build config
00:05:46.122 pipeline: explicitly disabled via build config
00:05:46.122 graph: explicitly disabled via build config
00:05:46.122 node: explicitly disabled via build config
00:05:46.122
00:05:46.122 drivers:
00:05:46.122 common/cpt: not in enabled drivers build config
00:05:46.122 common/dpaax: not in enabled drivers build config
00:05:46.122 common/iavf: not in enabled drivers build config
00:05:46.122 common/idpf: not in enabled drivers build config
00:05:46.122 common/ionic: not in enabled drivers build config
00:05:46.122 common/mvep: not in enabled drivers build config
00:05:46.122 common/octeontx: not in enabled drivers build config
00:05:46.122 bus/auxiliary: not in enabled drivers build config
00:05:46.122 bus/cdx: not in enabled drivers build config
00:05:46.122 bus/dpaa: not in enabled drivers build config
00:05:46.122 bus/fslmc: not in enabled drivers build config
00:05:46.122 bus/ifpga: not in enabled drivers build config
00:05:46.122 bus/platform: not in enabled drivers build config
00:05:46.122 bus/uacce: not in enabled drivers build config
00:05:46.122 bus/vmbus: not in enabled drivers build config
00:05:46.122 common/cnxk: not in enabled drivers build config
00:05:46.122 common/mlx5: not in enabled drivers build config
00:05:46.122 common/nfp: not in enabled drivers build config
00:05:46.122 common/nitrox: not in enabled drivers build config
00:05:46.122 common/qat: not in enabled drivers build config
00:05:46.122 common/sfc_efx: not in enabled drivers build config
00:05:46.122 mempool/bucket: not in enabled drivers build config
00:05:46.122 mempool/cnxk: not in enabled drivers build config
00:05:46.122 mempool/dpaa: not in enabled drivers build config
00:05:46.122 mempool/dpaa2: not in enabled drivers build config
00:05:46.122 mempool/octeontx: not in enabled drivers build config
00:05:46.122 mempool/stack: not in enabled drivers build config
00:05:46.122 dma/cnxk: not in enabled drivers build config
00:05:46.122 dma/dpaa: not in enabled drivers build config
00:05:46.122 dma/dpaa2: not in enabled drivers build config
00:05:46.122 dma/hisilicon: not in enabled drivers build config
00:05:46.122 dma/idxd: not in enabled drivers build config
00:05:46.122 dma/ioat: not in enabled drivers build config
00:05:46.122 dma/skeleton: not in enabled drivers build config
00:05:46.122 net/af_packet: not in enabled drivers build config
00:05:46.122 net/af_xdp: not in enabled drivers build config
00:05:46.122 net/ark: not in enabled drivers build config
00:05:46.122 net/atlantic: not in enabled drivers build config
00:05:46.122 net/avp: not in enabled drivers build config
00:05:46.122 net/axgbe: not in enabled drivers build config
00:05:46.122 net/bnx2x: not in enabled drivers build config
00:05:46.122 net/bnxt: not in enabled drivers build config
00:05:46.122 net/bonding: not in enabled drivers build config
00:05:46.122 net/cnxk: not in enabled drivers build config
00:05:46.122 net/cpfl: not in enabled drivers build config
00:05:46.122 net/cxgbe: not in enabled drivers build config
00:05:46.122 net/dpaa: not in enabled drivers build config
00:05:46.122 net/dpaa2: not in enabled drivers build config
00:05:46.122 net/e1000: not in enabled drivers build config
00:05:46.122 net/ena: not in enabled drivers build config
00:05:46.122 net/enetc: not in enabled drivers build config
00:05:46.122 net/enetfec: not in enabled drivers build config
00:05:46.122 net/enic: not in enabled drivers build config
00:05:46.122 net/failsafe: not in enabled drivers build config
00:05:46.122 net/fm10k: not in enabled drivers build config
00:05:46.122 net/gve: not in enabled drivers build config
00:05:46.122 net/hinic: not in enabled drivers build config
00:05:46.122 net/hns3: not in enabled drivers build config
00:05:46.122 net/i40e: not in enabled drivers build config
00:05:46.122 net/iavf: not in enabled drivers build config
00:05:46.122 net/ice: not in enabled drivers build config
00:05:46.122 net/idpf: not in enabled drivers build config
00:05:46.122 net/igc: not in enabled drivers build config
00:05:46.122 net/ionic: not in enabled drivers build config
00:05:46.122 net/ipn3ke: not in enabled drivers build config
00:05:46.122 net/ixgbe: not in enabled drivers build config
00:05:46.122 net/mana: not in enabled drivers build config
00:05:46.122 net/memif: not in enabled drivers build config
00:05:46.122 net/mlx4: not in enabled drivers build config
00:05:46.122 net/mlx5: not in enabled drivers build config
00:05:46.122 net/mvneta: not in enabled drivers build config
00:05:46.122 net/mvpp2: not in enabled drivers build config
00:05:46.122 net/netvsc: not in enabled drivers build config
00:05:46.122 net/nfb: not in enabled drivers build config
00:05:46.122 net/nfp: not in enabled drivers build config
00:05:46.122 net/ngbe: not in enabled drivers build config
00:05:46.122 net/null: not in enabled drivers build config
00:05:46.122 net/octeontx: not in enabled drivers build config
00:05:46.122 net/octeon_ep: not in enabled drivers build config
00:05:46.123 net/pcap: not in enabled drivers build config
00:05:46.123 net/pfe: not in enabled drivers build config
00:05:46.123 net/qede: not in enabled drivers build config
00:05:46.123 net/ring: not in enabled drivers build config
00:05:46.123 net/sfc: not in enabled drivers build config
00:05:46.123 net/softnic: not in enabled drivers build config
00:05:46.123 net/tap: not in enabled drivers build config
00:05:46.123 net/thunderx: not in enabled drivers build config
00:05:46.123 net/txgbe: not in enabled drivers build config
00:05:46.123 net/vdev_netvsc: not in enabled drivers build config
00:05:46.123 net/vhost: not in enabled drivers build config
00:05:46.123 net/virtio: not in enabled drivers build config
00:05:46.123 net/vmxnet3: not in enabled drivers build config
00:05:46.123 raw/*: missing internal dependency, "rawdev"
00:05:46.123 crypto/armv8: not in enabled drivers build config
00:05:46.123 crypto/bcmfs: not in enabled drivers build config
00:05:46.123 crypto/caam_jr: not in enabled drivers build config
00:05:46.123 crypto/ccp: not in enabled drivers build config
00:05:46.123 crypto/cnxk: not in enabled drivers build config
00:05:46.123 crypto/dpaa_sec: not in enabled drivers build config
00:05:46.123 crypto/dpaa2_sec: not in enabled drivers build config
00:05:46.123 crypto/ipsec_mb: not in enabled drivers build config
00:05:46.123 crypto/mlx5: not in enabled drivers build config
00:05:46.123 crypto/mvsam: not in enabled drivers build config
00:05:46.123 crypto/nitrox: not in enabled drivers build config
00:05:46.123 crypto/null: not in enabled drivers build config
00:05:46.123 crypto/octeontx: not in enabled drivers build config
00:05:46.123 crypto/openssl: not in enabled drivers build config
00:05:46.123 crypto/scheduler: not in enabled drivers build config
00:05:46.123 crypto/uadk: not in enabled drivers build config
00:05:46.123 crypto/virtio: not in enabled drivers build config
00:05:46.123 compress/isal: not in enabled drivers build config
00:05:46.123 compress/mlx5: not in enabled drivers build config
00:05:46.123 compress/nitrox: not in enabled drivers build config
00:05:46.123 compress/octeontx: not in enabled drivers build config
00:05:46.123 compress/zlib: not in enabled drivers build config
00:05:46.123 regex/*: missing internal dependency, "regexdev"
00:05:46.123 ml/*: missing internal dependency, "mldev"
00:05:46.123 vdpa/ifc: not in enabled drivers build config
00:05:46.123 vdpa/mlx5: not in enabled drivers build config
00:05:46.123 vdpa/nfp: not in enabled drivers build config
00:05:46.123 vdpa/sfc: not in enabled drivers build config
00:05:46.123 event/*: missing internal dependency, "eventdev"
00:05:46.123 baseband/*: missing internal dependency, "bbdev"
00:05:46.123 gpu/*: missing internal dependency, "gpudev"
00:05:46.123
00:05:46.123
00:05:46.123 Build targets in project: 85
00:05:46.123
00:05:46.123 DPDK 24.03.0
00:05:46.123
00:05:46.123 User defined options
00:05:46.123 buildtype : debug
00:05:46.123 default_library : shared
00:05:46.123 libdir : lib
00:05:46.123 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:46.123 b_sanitize : address
00:05:46.123 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:46.123 c_link_args :
00:05:46.123 cpu_instruction_set: native
00:05:46.123 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:46.123 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:46.123 enable_docs : false
00:05:46.123 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:46.123 enable_kmods : false
00:05:46.123 max_lcores : 128
00:05:46.123 tests : false
00:05:46.123
00:05:46.123 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:46.123 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:46.123 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:46.123 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:46.123 [3/268] Linking static target lib/librte_kvargs.a
00:05:46.123 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:46.123 [5/268] Linking static target lib/librte_log.a
00:05:46.123 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:46.382 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:46.382 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:46.382 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:46.382 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:46.382 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:46.382 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:46.641 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:46.641 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:46.641 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:46.641 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:46.641 [17/268] Linking static target lib/librte_telemetry.a
00:05:46.641 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:46.899 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:46.899 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:46.899 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:47.157 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:47.157 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:47.157 [24/268] Linking target lib/librte_log.so.24.1
00:05:47.157 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:47.157 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:47.157 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:47.415 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:47.415 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:47.415 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:47.415 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:47.415 [32/268] Linking target lib/librte_kvargs.so.24.1
00:05:47.415 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:47.675 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:47.675 [35/268] Linking target lib/librte_telemetry.so.24.1
00:05:47.675 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:47.675 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:47.675 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:47.675 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:47.675 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:47.675 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:47.934 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:47.934 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:47.934 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:47.934 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:47.934 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:48.193 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:48.193 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:48.453 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:48.453 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:48.453 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:48.453 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:48.453 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:48.711 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:48.711 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:48.711 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:48.711 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:48.969 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:48.969 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:48.969 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:48.970 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:48.970 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:48.970 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:48.970 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:49.228 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:49.228 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:49.228 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:49.228 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:49.487 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:49.487 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:49.746 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:49.746 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:49.746 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:49.746 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:49.746 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:49.746 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:49.746 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:49.746 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:50.006 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:50.006 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:50.006 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:50.006 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:50.006 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:50.264 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:50.265 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:50.265 [86/268] Linking static target lib/librte_eal.a
00:05:50.265 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:50.265 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:50.524 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:50.524 [90/268] Linking static target lib/librte_ring.a
00:05:50.524 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:50.524 [92/268] Linking static target lib/librte_mempool.a
00:05:50.783 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:50.783 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:50.783 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:50.783 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:50.783 [97/268] Linking static target lib/librte_rcu.a
00:05:51.042 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:51.042 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:51.042 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:51.042 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:51.042 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:51.042 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:51.042 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:51.302 [105/268] Linking static target lib/librte_mbuf.a
00:05:51.302 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:51.302 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:51.302 [108/268] Linking static target lib/librte_net.a
00:05:51.302 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:51.560 [110/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:51.560 [111/268] Linking static target lib/librte_meter.a
00:05:51.819 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:51.819 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:51.819 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:51.819 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:51.819 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:51.819 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:51.819 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:52.078 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:52.336 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:52.336 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:52.336 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:52.598 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:52.598 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:52.598 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:52.598 [126/268] Linking static target lib/librte_pci.a
00:05:52.857 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:52.857 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:53.116 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:53.116 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:53.116 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:53.116 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:53.116 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:53.116 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:53.116 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:53.116 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:53.116 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:53.116 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:53.116 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:53.376 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:53.376 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:53.376 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:53.376 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:53.376 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:53.376 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:53.376 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:53.376 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:53.635 [148/268] Linking static target lib/librte_cmdline.a
00:05:53.635 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:53.895 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:53.895 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:53.895 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:53.895 [153/268] Linking static target lib/librte_timer.a
00:05:53.895 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:54.154 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:54.154 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:54.154 [157/268] Linking static target lib/librte_compressdev.a
00:05:54.413 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:54.672 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:54.672 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:54.672 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:54.672 [162/268] Linking static target lib/librte_hash.a
00:05:54.672 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:54.672 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:54.672 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:54.931 [166/268] Linking static target lib/librte_dmadev.a
00:05:54.931 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:55.190 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:55.190 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:55.190 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:55.190 [171/268] Linking static target lib/librte_ethdev.a
00:05:55.190 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:55.190 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:55.190 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:55.449 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:55.707 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:55.707 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:55.707 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:55.707 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:55.707 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:05:55.707 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:55.707 [182/268] Linking static target lib/librte_cryptodev.a
00:05:55.965 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:55.965 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:55.965 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:56.225 [186/268] Linking static target lib/librte_power.a
00:05:56.225 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:56.484 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:56.484 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:56.484 [190/268] Linking static target lib/librte_reorder.a
00:05:56.484 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:56.743 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:56.743 [193/268] Linking static target lib/librte_security.a
00:05:57.001 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.001 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:57.260 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.519 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:05:57.519 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:57.519 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:57.779 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:57.779 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:58.038 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:58.038 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:58.038 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:58.298 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:58.298 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:58.298 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:58.298 [208/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:58.298 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:58.557 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:58.557 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:58.557 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:58.557 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:58.557 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:58.557 [215/268] Linking static target drivers/librte_bus_pci.a
00:05:58.816 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:58.816 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:58.816 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:58.816 [219/268] Linking static target drivers/librte_bus_vdev.a
00:05:58.816 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:58.816 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:59.075 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:59.075 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:59.075 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:59.075 [225/268] Linking static target drivers/librte_mempool_ring.a
00:05:59.076 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:59.335 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:59.903 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:06:03.192 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:06:03.192 [230/268] Linking target lib/librte_eal.so.24.1
00:06:03.192 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:06:03.192 [232/268] Linking target lib/librte_timer.so.24.1
00:06:03.192 [233/268] Linking target lib/librte_pci.so.24.1
00:06:03.192 [234/268] Linking target lib/librte_ring.so.24.1
00:06:03.192 [235/268] Linking target drivers/librte_bus_vdev.so.24.1
00:06:03.192 [236/268] Linking target lib/librte_meter.so.24.1
00:06:03.192 [237/268] Linking target lib/librte_dmadev.so.24.1
00:06:03.192 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:06:03.192 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:06:03.192 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:06:03.192 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:06:03.192 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:06:03.192 [243/268] Linking target lib/librte_mempool.so.24.1
00:06:03.192 [244/268] Linking target lib/librte_rcu.so.24.1
00:06:03.192 [245/268] Linking target drivers/librte_bus_pci.so.24.1
00:06:03.451 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:06:03.451 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:06:03.451 [248/268] Linking target drivers/librte_mempool_ring.so.24.1
00:06:03.451 [249/268] Linking target lib/librte_mbuf.so.24.1
00:06:03.708 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:06:03.708 [251/268] Linking target lib/librte_net.so.24.1
00:06:03.708 [252/268] Linking target lib/librte_reorder.so.24.1
00:06:03.708 [253/268] Linking target lib/librte_cryptodev.so.24.1
00:06:03.708 [254/268] Linking target lib/librte_compressdev.so.24.1
00:06:03.708 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:06:03.708 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:06:03.967 [257/268] Linking target lib/librte_cmdline.so.24.1
00:06:03.967 [258/268] Linking target lib/librte_security.so.24.1
00:06:03.967 [259/268] Linking target lib/librte_hash.so.24.1
00:06:03.967 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:06:04.225 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:06:04.225 [262/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:04.225 [263/268] Linking static target lib/librte_vhost.a
00:06:04.225 [264/268] Linking target lib/librte_ethdev.so.24.1
00:06:04.484 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:06:04.484 [266/268] Linking target lib/librte_power.so.24.1
00:06:07.017 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:06:07.017 [268/268] Linking target lib/librte_vhost.so.24.1
00:06:07.017 INFO: autodetecting backend as ninja
00:06:07.017 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:06:25.118 CC lib/log/log.o
00:06:25.118 CC lib/log/log_deprecated.o
00:06:25.118 CC lib/log/log_flags.o
00:06:25.118 CC lib/ut/ut.o
00:06:25.118 CC lib/ut_mock/mock.o
00:06:25.118 LIB libspdk_ut.a
00:06:25.118 LIB libspdk_ut_mock.a
00:06:25.118 LIB libspdk_log.a
00:06:25.118 SO libspdk_ut.so.2.0
00:06:25.118 SO libspdk_ut_mock.so.6.0
00:06:25.118 SO libspdk_log.so.7.1
00:06:25.118 SYMLINK libspdk_ut.so
00:06:25.118 SYMLINK libspdk_log.so
00:06:25.118 SYMLINK libspdk_ut_mock.so
00:06:25.118 CC lib/dma/dma.o
00:06:25.118 CC lib/ioat/ioat.o
00:06:25.118 CXX lib/trace_parser/trace.o
00:06:25.118 CC lib/util/crc16.o
00:06:25.118 CC lib/util/bit_array.o
00:06:25.118 CC lib/util/base64.o
00:06:25.118 CC lib/util/crc32c.o
00:06:25.118 CC lib/util/cpuset.o
00:06:25.118 CC lib/util/crc32.o
00:06:25.118 CC lib/vfio_user/host/vfio_user_pci.o
00:06:25.118 CC lib/util/crc32_ieee.o
00:06:25.118 CC lib/util/crc64.o
00:06:25.118 CC lib/util/dif.o
00:06:25.118 LIB libspdk_dma.a
00:06:25.118 CC lib/vfio_user/host/vfio_user.o
00:06:25.118 SO libspdk_dma.so.5.0
00:06:25.118 CC lib/util/fd.o
00:06:25.118 CC lib/util/fd_group.o
00:06:25.118 CC lib/util/file.o
00:06:25.118 CC lib/util/hexlify.o
00:06:25.118 LIB libspdk_ioat.a
00:06:25.118 SYMLINK libspdk_dma.so
00:06:25.118 CC lib/util/iov.o
00:06:25.118 SO libspdk_ioat.so.7.0
00:06:25.118 SYMLINK libspdk_ioat.so
00:06:25.118 CC lib/util/math.o
00:06:25.118 CC lib/util/net.o
00:06:25.118 CC lib/util/pipe.o
00:06:25.118 LIB libspdk_vfio_user.a
00:06:25.118 CC lib/util/strerror_tls.o
00:06:25.118 CC lib/util/string.o
00:06:25.118 SO libspdk_vfio_user.so.5.0
00:06:25.118 CC lib/util/uuid.o
00:06:25.118 SYMLINK libspdk_vfio_user.so
00:06:25.118 CC lib/util/xor.o
00:06:25.118 CC lib/util/zipf.o
00:06:25.118 CC lib/util/md5.o
00:06:25.118 LIB libspdk_util.a
00:06:25.118 SO libspdk_util.so.10.1
00:06:25.118 LIB libspdk_trace_parser.a
00:06:25.118 SO libspdk_trace_parser.so.6.0
00:06:25.118 SYMLINK libspdk_util.so
00:06:25.118 SYMLINK libspdk_trace_parser.so
00:06:25.118 CC lib/rdma_utils/rdma_utils.o
00:06:25.118 CC lib/env_dpdk/env.o
00:06:25.118 CC lib/idxd/idxd_user.o
00:06:25.118 CC lib/idxd/idxd.o
00:06:25.118 CC lib/conf/conf.o
00:06:25.118 CC lib/idxd/idxd_kernel.o
00:06:25.118 CC lib/env_dpdk/memory.o
00:06:25.118 CC lib/env_dpdk/pci.o
00:06:25.118 CC lib/vmd/vmd.o
00:06:25.118 CC lib/json/json_parse.o
00:06:25.118 CC lib/json/json_util.o
00:06:25.380 LIB libspdk_conf.a
00:06:25.380 CC lib/json/json_write.o
00:06:25.380 CC lib/vmd/led.o
00:06:25.380 SO libspdk_conf.so.6.0
00:06:25.380 LIB libspdk_rdma_utils.a
00:06:25.380 SO libspdk_rdma_utils.so.1.0
00:06:25.380 SYMLINK libspdk_conf.so
00:06:25.380 CC lib/env_dpdk/init.o
00:06:25.380 SYMLINK libspdk_rdma_utils.so
00:06:25.380 CC lib/env_dpdk/threads.o
00:06:25.640 CC lib/env_dpdk/pci_ioat.o
00:06:25.640 CC lib/env_dpdk/pci_virtio.o
00:06:25.640 CC lib/env_dpdk/pci_vmd.o
00:06:25.640 CC lib/env_dpdk/pci_idxd.o
00:06:25.640 LIB libspdk_json.a
00:06:25.640 CC lib/env_dpdk/pci_event.o
00:06:25.640 CC lib/rdma_provider/common.o
00:06:25.640 SO libspdk_json.so.6.0
00:06:25.899 CC lib/rdma_provider/rdma_provider_verbs.o
00:06:25.899 SYMLINK libspdk_json.so
00:06:25.899 CC lib/env_dpdk/sigbus_handler.o
00:06:25.899 CC lib/env_dpdk/pci_dpdk.o
00:06:25.899 CC lib/env_dpdk/pci_dpdk_2207.o
00:06:25.899 CC lib/env_dpdk/pci_dpdk_2211.o
00:06:25.899 LIB libspdk_vmd.a
00:06:25.899 LIB libspdk_idxd.a
00:06:25.899 SO libspdk_vmd.so.6.0
00:06:25.899 SO libspdk_idxd.so.12.1
00:06:25.899 SYMLINK libspdk_vmd.so
00:06:25.899 LIB libspdk_rdma_provider.a
00:06:26.158 SO libspdk_rdma_provider.so.7.0
00:06:26.158 SYMLINK libspdk_idxd.so
00:06:26.158 CC lib/jsonrpc/jsonrpc_server.o
00:06:26.158 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:06:26.158 CC lib/jsonrpc/jsonrpc_client.o
00:06:26.158 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:06:26.158 SYMLINK libspdk_rdma_provider.so
00:06:26.417 LIB libspdk_jsonrpc.a
00:06:26.417 SO libspdk_jsonrpc.so.6.0
00:06:26.676 SYMLINK libspdk_jsonrpc.so
00:06:26.935 CC lib/rpc/rpc.o
00:06:27.194 LIB libspdk_env_dpdk.a
00:06:27.195 SO libspdk_env_dpdk.so.15.1
00:06:27.195 LIB libspdk_rpc.a
00:06:27.195 SO libspdk_rpc.so.6.0
00:06:27.195 SYMLINK libspdk_env_dpdk.so
00:06:27.195 SYMLINK libspdk_rpc.so
00:06:27.764 CC lib/notify/notify_rpc.o
00:06:27.764 CC lib/notify/notify.o
00:06:27.764 CC lib/trace/trace_flags.o
00:06:27.764 CC lib/trace/trace.o
00:06:27.764 CC lib/trace/trace_rpc.o
00:06:27.764 CC lib/keyring/keyring.o
00:06:27.764 CC lib/keyring/keyring_rpc.o
00:06:27.764 LIB libspdk_notify.a
00:06:27.764 SO libspdk_notify.so.6.0
00:06:28.023 LIB libspdk_keyring.a
00:06:28.023 LIB libspdk_trace.a
00:06:28.023 SO libspdk_keyring.so.2.0
00:06:28.023 SYMLINK libspdk_notify.so
00:06:28.023 SO libspdk_trace.so.11.0
00:06:28.023 SYMLINK libspdk_keyring.so
00:06:28.023 SYMLINK libspdk_trace.so
00:06:28.282 CC lib/sock/sock.o
00:06:28.282 CC lib/sock/sock_rpc.o
00:06:28.282 CC lib/thread/thread.o
00:06:28.282 CC lib/thread/iobuf.o
00:06:28.850 LIB libspdk_sock.a
00:06:28.850 SO libspdk_sock.so.10.0
00:06:29.109 SYMLINK libspdk_sock.so
00:06:29.367 CC lib/nvme/nvme_ctrlr.o
00:06:29.367 CC lib/nvme/nvme_ctrlr_cmd.o
00:06:29.367 CC lib/nvme/nvme_fabric.o
00:06:29.367 CC lib/nvme/nvme_ns.o
00:06:29.367 CC lib/nvme/nvme_ns_cmd.o
00:06:29.367 CC lib/nvme/nvme_pcie_common.o
00:06:29.367 CC lib/nvme/nvme_pcie.o
00:06:29.367 CC lib/nvme/nvme.o
00:06:29.367 CC lib/nvme/nvme_qpair.o
00:06:30.304 LIB libspdk_thread.a
00:06:30.304 CC lib/nvme/nvme_quirks.o
00:06:30.304 SO libspdk_thread.so.11.0
00:06:30.304 CC lib/nvme/nvme_transport.o
00:06:30.304 SYMLINK libspdk_thread.so
00:06:30.304 CC lib/nvme/nvme_discovery.o
00:06:30.304 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:06:30.304 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:06:30.563 CC lib/nvme/nvme_tcp.o
00:06:30.563 CC lib/nvme/nvme_opal.o
00:06:30.821 CC lib/nvme/nvme_io_msg.o
00:06:30.821 CC lib/accel/accel.o
00:06:30.821 CC lib/blob/blobstore.o
00:06:30.821 CC lib/nvme/nvme_poll_group.o
00:06:31.080 CC lib/init/json_config.o
00:06:31.080 CC lib/init/subsystem.o
00:06:31.080 CC lib/blob/request.o
00:06:31.080 CC lib/virtio/virtio.o
00:06:31.339 CC lib/init/subsystem_rpc.o
00:06:31.339 CC lib/init/rpc.o
00:06:31.339 CC lib/accel/accel_rpc.o
00:06:31.339 CC lib/accel/accel_sw.o
00:06:31.597 LIB libspdk_init.a
00:06:31.598 CC lib/virtio/virtio_vhost_user.o
00:06:31.598 CC lib/blob/zeroes.o
00:06:31.598 SO libspdk_init.so.6.0
00:06:31.598 CC lib/blob/blob_bs_dev.o
00:06:31.598 CC lib/nvme/nvme_zns.o
00:06:31.598 SYMLINK libspdk_init.so
00:06:31.598 CC lib/nvme/nvme_stubs.o
00:06:31.856 CC lib/virtio/virtio_vfio_user.o
00:06:31.856 CC lib/virtio/virtio_pci.o
00:06:31.856 CC lib/nvme/nvme_auth.o
00:06:32.114 CC lib/fsdev/fsdev.o
00:06:32.114 CC lib/fsdev/fsdev_io.o
00:06:32.114 LIB libspdk_accel.a
00:06:32.114 LIB libspdk_virtio.a
00:06:32.114 SO libspdk_accel.so.16.0
00:06:32.114 SO libspdk_virtio.so.7.0
00:06:32.114 CC lib/fsdev/fsdev_rpc.o
00:06:32.114 SYMLINK libspdk_accel.so
00:06:32.114 CC lib/nvme/nvme_cuse.o
00:06:32.372 CC lib/nvme/nvme_rdma.o
00:06:32.372 SYMLINK libspdk_virtio.so
00:06:32.372 CC lib/event/app.o
00:06:32.372 CC lib/event/reactor.o
00:06:32.630 CC lib/event/log_rpc.o
00:06:32.630 CC lib/bdev/bdev.o
00:06:32.630 CC lib/bdev/bdev_rpc.o
00:06:32.630 CC lib/event/app_rpc.o
00:06:32.888 LIB libspdk_fsdev.a
00:06:32.888 CC lib/event/scheduler_static.o
00:06:32.888 SO libspdk_fsdev.so.2.0
00:06:32.888 CC lib/bdev/bdev_zone.o
00:06:32.888 SYMLINK libspdk_fsdev.so
00:06:33.146 CC lib/bdev/part.o
00:06:33.146 CC lib/bdev/scsi_nvme.o
00:06:33.146 LIB libspdk_event.a
00:06:33.146 SO libspdk_event.so.14.0
00:06:33.146 SYMLINK libspdk_event.so
00:06:33.404 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:06:33.971 LIB libspdk_nvme.a
00:06:33.972 SO libspdk_nvme.so.15.0
00:06:34.229 LIB libspdk_fuse_dispatcher.a
00:06:34.229 SO libspdk_fuse_dispatcher.so.1.0
00:06:34.229 SYMLINK libspdk_fuse_dispatcher.so
00:06:34.488 SYMLINK libspdk_nvme.so
00:06:35.075 LIB libspdk_blob.a
00:06:35.075 SO libspdk_blob.so.12.0
00:06:35.333 SYMLINK libspdk_blob.so
00:06:35.590 CC lib/blobfs/blobfs.o
00:06:35.590 CC lib/blobfs/tree.o
00:06:35.590 CC lib/lvol/lvol.o
00:06:36.158 LIB libspdk_bdev.a
00:06:36.158 SO libspdk_bdev.so.17.0
00:06:36.416 SYMLINK libspdk_bdev.so
00:06:36.675 CC lib/nbd/nbd.o
00:06:36.675 CC lib/ftl/ftl_core.o
00:06:36.675 CC lib/ftl/ftl_init.o
00:06:36.675 CC lib/nbd/nbd_rpc.o
00:06:36.675 CC lib/ftl/ftl_layout.o
00:06:36.675 CC lib/scsi/dev.o
00:06:36.675 CC lib/ublk/ublk.o
00:06:36.675 CC lib/nvmf/ctrlr.o
00:06:36.675 LIB libspdk_blobfs.a
00:06:36.675 LIB libspdk_lvol.a
00:06:36.675 SO libspdk_blobfs.so.11.0
00:06:36.675 SO libspdk_lvol.so.11.0
00:06:36.934 CC lib/ftl/ftl_debug.o
00:06:36.934 SYMLINK libspdk_blobfs.so
00:06:36.934 SYMLINK libspdk_lvol.so
00:06:36.934 CC lib/ftl/ftl_io.o
00:06:36.934 CC lib/scsi/lun.o
00:06:36.934 CC lib/ublk/ublk_rpc.o
00:06:36.934 CC lib/ftl/ftl_sb.o
00:06:36.934 CC lib/ftl/ftl_l2p.o
00:06:37.194 CC lib/ftl/ftl_l2p_flat.o
00:06:37.194 CC lib/ftl/ftl_nv_cache.o
00:06:37.194 CC lib/ftl/ftl_band.o
00:06:37.194 LIB libspdk_nbd.a
00:06:37.194 SO libspdk_nbd.so.7.0
00:06:37.194 CC lib/ftl/ftl_band_ops.o
00:06:37.194 CC lib/scsi/port.o
00:06:37.194 SYMLINK libspdk_nbd.so
00:06:37.194 CC lib/ftl/ftl_writer.o
00:06:37.194 CC lib/nvmf/ctrlr_discovery.o
00:06:37.194 CC lib/ftl/ftl_rq.o
00:06:37.194 CC lib/ftl/ftl_reloc.o
00:06:37.452 CC lib/scsi/scsi.o
00:06:37.452 LIB libspdk_ublk.a
00:06:37.452 SO libspdk_ublk.so.3.0
00:06:37.452 CC lib/ftl/ftl_l2p_cache.o
00:06:37.452 SYMLINK libspdk_ublk.so
00:06:37.452 CC lib/ftl/ftl_p2l.o
00:06:37.452 CC lib/ftl/ftl_p2l_log.o
00:06:37.452 CC lib/scsi/scsi_bdev.o
00:06:37.452 CC lib/ftl/mngt/ftl_mngt.o
00:06:37.710 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:06:37.710 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:06:37.710 CC lib/ftl/mngt/ftl_mngt_startup.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_md.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_misc.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_band.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:06:37.968 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:06:38.225 CC lib/scsi/scsi_pr.o
00:06:38.225 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:06:38.225 CC lib/scsi/scsi_rpc.o
00:06:38.225 CC lib/ftl/utils/ftl_conf.o
00:06:38.225 CC lib/ftl/utils/ftl_md.o
00:06:38.225 CC lib/ftl/utils/ftl_mempool.o
00:06:38.225 CC lib/scsi/task.o
00:06:38.225 CC lib/ftl/utils/ftl_bitmap.o
00:06:38.483 CC lib/ftl/utils/ftl_property.o
00:06:38.483 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:06:38.483 CC lib/nvmf/ctrlr_bdev.o
00:06:38.483 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:06:38.483 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:06:38.483 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:06:38.483 LIB libspdk_scsi.a
00:06:38.483 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:06:38.483 CC lib/nvmf/subsystem.o
00:06:38.741 SO libspdk_scsi.so.9.0
00:06:38.741 CC lib/nvmf/nvmf.o
00:06:38.741 SYMLINK libspdk_scsi.so
00:06:38.741 CC lib/nvmf/nvmf_rpc.o
00:06:38.741 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:06:38.741 CC lib/nvmf/transport.o
00:06:38.741 CC lib/nvmf/tcp.o
00:06:38.741 CC lib/nvmf/stubs.o
00:06:38.741 CC lib/nvmf/mdns_server.o
00:06:38.741 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:06:38.999 CC lib/ftl/upgrade/ftl_sb_v3.o
00:06:38.999 CC lib/iscsi/conn.o
00:06:39.258 CC lib/iscsi/init_grp.o
00:06:39.258 CC lib/iscsi/iscsi.o
00:06:39.258 CC lib/ftl/upgrade/ftl_sb_v5.o
00:06:39.517 CC lib/nvmf/rdma.o
00:06:39.517 CC lib/nvmf/auth.o
00:06:39.517 CC lib/ftl/nvc/ftl_nvc_dev.o
00:06:39.517 CC lib/iscsi/param.o
00:06:39.777 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:06:39.777 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:06:39.777 CC lib/iscsi/portal_grp.o
00:06:39.777 CC lib/vhost/vhost.o
00:06:40.035 CC lib/vhost/vhost_rpc.o
00:06:40.035 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:06:40.035 CC lib/ftl/base/ftl_base_dev.o
00:06:40.035 CC lib/ftl/base/ftl_base_bdev.o
00:06:40.035 CC lib/ftl/ftl_trace.o
00:06:40.300 CC lib/vhost/vhost_scsi.o
00:06:40.300 CC lib/iscsi/tgt_node.o
00:06:40.300 CC lib/iscsi/iscsi_subsystem.o
00:06:40.300 LIB libspdk_ftl.a
00:06:40.300 CC lib/iscsi/iscsi_rpc.o
00:06:40.598 CC lib/iscsi/task.o
00:06:40.598 CC lib/vhost/vhost_blk.o
00:06:40.598 SO libspdk_ftl.so.9.0
00:06:40.856 CC lib/vhost/rte_vhost_user.o
00:06:40.856 LIB libspdk_iscsi.a
00:06:41.116 SYMLINK libspdk_ftl.so
00:06:41.116 SO libspdk_iscsi.so.8.0
00:06:41.375 SYMLINK libspdk_iscsi.so
00:06:41.945 LIB libspdk_vhost.a
00:06:41.945 SO libspdk_vhost.so.8.0
00:06:41.945 LIB libspdk_nvmf.a
00:06:41.945 SYMLINK libspdk_vhost.so
00:06:42.204 SO libspdk_nvmf.so.20.0
00:06:42.463 SYMLINK libspdk_nvmf.so
00:06:42.727 CC module/env_dpdk/env_dpdk_rpc.o
00:06:42.988 CC module/keyring/file/keyring.o
00:06:42.988 CC module/scheduler/dynamic/scheduler_dynamic.o
00:06:42.988 CC module/sock/posix/posix.o
00:06:42.988 CC module/accel/error/accel_error.o
00:06:42.988 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:06:42.988 CC module/fsdev/aio/fsdev_aio.o
00:06:42.988 CC module/accel/ioat/accel_ioat.o
00:06:42.988 CC module/keyring/linux/keyring.o
00:06:42.988 CC module/blob/bdev/blob_bdev.o
00:06:42.989 LIB libspdk_env_dpdk_rpc.a
00:06:42.989 SO libspdk_env_dpdk_rpc.so.6.0
00:06:42.989 SYMLINK libspdk_env_dpdk_rpc.so
00:06:42.989 CC module/fsdev/aio/fsdev_aio_rpc.o
00:06:42.989 CC module/keyring/file/keyring_rpc.o
00:06:42.989 CC module/keyring/linux/keyring_rpc.o
00:06:42.989 LIB libspdk_scheduler_dpdk_governor.a
00:06:42.989 SO libspdk_scheduler_dpdk_governor.so.4.0
00:06:42.989 LIB libspdk_scheduler_dynamic.a
00:06:42.989 CC module/accel/ioat/accel_ioat_rpc.o
00:06:42.989 CC module/accel/error/accel_error_rpc.o
00:06:42.989 SO libspdk_scheduler_dynamic.so.4.0
00:06:43.247 SYMLINK libspdk_scheduler_dpdk_governor.so
00:06:43.247 SYMLINK libspdk_scheduler_dynamic.so
00:06:43.247 CC module/fsdev/aio/linux_aio_mgr.o
00:06:43.247 LIB libspdk_keyring_linux.a
00:06:43.247 LIB libspdk_keyring_file.a
00:06:43.247 LIB libspdk_blob_bdev.a
00:06:43.247 SO libspdk_keyring_linux.so.1.0
00:06:43.247 SO libspdk_keyring_file.so.2.0
00:06:43.247 LIB libspdk_accel_ioat.a
00:06:43.247 SO libspdk_blob_bdev.so.12.0
00:06:43.247 LIB libspdk_accel_error.a
00:06:43.247 SO libspdk_accel_ioat.so.6.0
00:06:43.247 SO libspdk_accel_error.so.2.0
00:06:43.247 SYMLINK libspdk_keyring_linux.so
00:06:43.247 SYMLINK libspdk_keyring_file.so
00:06:43.247 SYMLINK libspdk_blob_bdev.so
00:06:43.247 SYMLINK libspdk_accel_ioat.so
00:06:43.247 SYMLINK libspdk_accel_error.so
00:06:43.247 CC module/scheduler/gscheduler/gscheduler.o
00:06:43.247 CC module/accel/dsa/accel_dsa.o
00:06:43.247 CC module/accel/dsa/accel_dsa_rpc.o
00:06:43.505 CC module/accel/iaa/accel_iaa.o
00:06:43.505 LIB libspdk_scheduler_gscheduler.a
00:06:43.505 SO libspdk_scheduler_gscheduler.so.4.0
00:06:43.505 CC module/bdev/gpt/gpt.o
00:06:43.505 CC module/blobfs/bdev/blobfs_bdev.o
00:06:43.505 CC module/bdev/error/vbdev_error.o
00:06:43.505 CC module/bdev/delay/vbdev_delay.o
00:06:43.505 SYMLINK libspdk_scheduler_gscheduler.so
00:06:43.505 CC module/bdev/delay/vbdev_delay_rpc.o
00:06:43.505 LIB libspdk_fsdev_aio.a
00:06:43.762 CC module/bdev/lvol/vbdev_lvol.o
00:06:43.762 LIB libspdk_accel_dsa.a
00:06:43.762 SO libspdk_fsdev_aio.so.1.0
00:06:43.762 SO libspdk_accel_dsa.so.5.0
00:06:43.762 CC module/accel/iaa/accel_iaa_rpc.o
00:06:43.762 LIB libspdk_sock_posix.a
00:06:43.762 SYMLINK libspdk_fsdev_aio.so
00:06:43.762 SYMLINK libspdk_accel_dsa.so
00:06:43.762 CC module/bdev/error/vbdev_error_rpc.o
00:06:43.762 CC module/bdev/gpt/vbdev_gpt.o
00:06:43.762 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:06:43.762 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:06:43.762 SO libspdk_sock_posix.so.6.0
00:06:43.762 LIB libspdk_accel_iaa.a
00:06:43.762 SO libspdk_accel_iaa.so.3.0
00:06:43.762 SYMLINK libspdk_sock_posix.so
00:06:44.020 LIB libspdk_bdev_error.a
00:06:44.020 CC module/bdev/malloc/bdev_malloc.o
00:06:44.020 LIB libspdk_blobfs_bdev.a
00:06:44.020 SYMLINK libspdk_accel_iaa.so
00:06:44.020 CC module/bdev/malloc/bdev_malloc_rpc.o
00:06:44.020 SO libspdk_bdev_error.so.6.0
00:06:44.020 SO libspdk_blobfs_bdev.so.6.0
00:06:44.020 LIB libspdk_bdev_delay.a
00:06:44.020 SO libspdk_bdev_delay.so.6.0
00:06:44.020 CC module/bdev/null/bdev_null.o
00:06:44.020 SYMLINK libspdk_bdev_error.so
00:06:44.020 SYMLINK libspdk_blobfs_bdev.so
00:06:44.020 LIB libspdk_bdev_gpt.a
00:06:44.020 CC module/bdev/nvme/bdev_nvme.o
00:06:44.020 SO libspdk_bdev_gpt.so.6.0
00:06:44.020 SYMLINK libspdk_bdev_delay.so
00:06:44.020 CC module/bdev/null/bdev_null_rpc.o
00:06:44.020 CC module/bdev/nvme/bdev_nvme_rpc.o
00:06:44.277 SYMLINK libspdk_bdev_gpt.so
00:06:44.277 CC module/bdev/nvme/nvme_rpc.o
00:06:44.277 LIB libspdk_bdev_lvol.a
00:06:44.277 CC module/bdev/raid/bdev_raid.o
00:06:44.277 CC module/bdev/passthru/vbdev_passthru.o
00:06:44.277 SO libspdk_bdev_lvol.so.6.0
00:06:44.277 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:06:44.277 SYMLINK libspdk_bdev_lvol.so
00:06:44.277 CC module/bdev/raid/bdev_raid_rpc.o
00:06:44.278 LIB libspdk_bdev_malloc.a
00:06:44.278 CC module/bdev/split/vbdev_split.o
00:06:44.278 LIB libspdk_bdev_null.a
00:06:44.278 SO libspdk_bdev_malloc.so.6.0
00:06:44.278 SO libspdk_bdev_null.so.6.0
00:06:44.556 SYMLINK libspdk_bdev_malloc.so
00:06:44.556 CC module/bdev/raid/bdev_raid_sb.o
00:06:44.556 SYMLINK libspdk_bdev_null.so
00:06:44.556 CC module/bdev/raid/raid0.o
00:06:44.556 LIB libspdk_bdev_passthru.a
00:06:44.556 CC module/bdev/split/vbdev_split_rpc.o
00:06:44.556 CC module/bdev/zone_block/vbdev_zone_block.o
00:06:44.556 SO libspdk_bdev_passthru.so.6.0
00:06:44.556 CC module/bdev/xnvme/bdev_xnvme.o
00:06:44.556 SYMLINK libspdk_bdev_passthru.so
00:06:44.556 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:06:44.815 CC module/bdev/aio/bdev_aio.o
00:06:44.815 CC module/bdev/aio/bdev_aio_rpc.o
00:06:44.815 LIB libspdk_bdev_split.a
00:06:44.815 SO libspdk_bdev_split.so.6.0
00:06:44.815 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:06:44.815 CC module/bdev/raid/raid1.o
00:06:44.815 SYMLINK libspdk_bdev_split.so
00:06:44.815 CC module/bdev/raid/concat.o
00:06:44.815 CC module/bdev/nvme/bdev_mdns_client.o
00:06:45.073 LIB libspdk_bdev_xnvme.a
00:06:45.073 CC module/bdev/nvme/vbdev_opal.o
00:06:45.073 SO libspdk_bdev_xnvme.so.3.0
00:06:45.073 LIB libspdk_bdev_zone_block.a
00:06:45.073 SO libspdk_bdev_zone_block.so.6.0
00:06:45.073 LIB libspdk_bdev_aio.a
00:06:45.073 CC module/bdev/ftl/bdev_ftl.o
00:06:45.073 SYMLINK libspdk_bdev_xnvme.so
00:06:45.073 SO libspdk_bdev_aio.so.6.0
00:06:45.073 CC module/bdev/ftl/bdev_ftl_rpc.o
00:06:45.073 SYMLINK libspdk_bdev_zone_block.so
00:06:45.073 CC module/bdev/nvme/vbdev_opal_rpc.o
00:06:45.073 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:06:45.073 SYMLINK libspdk_bdev_aio.so
00:06:45.331 CC module/bdev/iscsi/bdev_iscsi.o
00:06:45.331 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:06:45.331 CC module/bdev/virtio/bdev_virtio_scsi.o
00:06:45.331 CC module/bdev/virtio/bdev_virtio_blk.o
00:06:45.331 CC module/bdev/virtio/bdev_virtio_rpc.o
00:06:45.331 LIB libspdk_bdev_raid.a
00:06:45.331 LIB libspdk_bdev_ftl.a
00:06:45.590 SO libspdk_bdev_ftl.so.6.0
00:06:45.590 SO libspdk_bdev_raid.so.6.0
00:06:45.590 SYMLINK libspdk_bdev_ftl.so
00:06:45.590 SYMLINK libspdk_bdev_raid.so
00:06:45.590 LIB libspdk_bdev_iscsi.a
00:06:45.848 SO libspdk_bdev_iscsi.so.6.0
00:06:45.848 SYMLINK libspdk_bdev_iscsi.so 00:06:45.848 LIB libspdk_bdev_virtio.a 00:06:46.106 SO libspdk_bdev_virtio.so.6.0 00:06:46.107 SYMLINK libspdk_bdev_virtio.so 00:06:47.045 LIB libspdk_bdev_nvme.a 00:06:47.305 SO libspdk_bdev_nvme.so.7.1 00:06:47.305 SYMLINK libspdk_bdev_nvme.so 00:06:47.873 CC module/event/subsystems/iobuf/iobuf.o 00:06:47.873 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:47.873 CC module/event/subsystems/vmd/vmd.o 00:06:47.873 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:47.873 CC module/event/subsystems/sock/sock.o 00:06:47.873 CC module/event/subsystems/keyring/keyring.o 00:06:47.873 CC module/event/subsystems/fsdev/fsdev.o 00:06:47.873 CC module/event/subsystems/scheduler/scheduler.o 00:06:47.873 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:48.132 LIB libspdk_event_sock.a 00:06:48.132 LIB libspdk_event_vmd.a 00:06:48.132 LIB libspdk_event_vhost_blk.a 00:06:48.132 LIB libspdk_event_scheduler.a 00:06:48.132 LIB libspdk_event_iobuf.a 00:06:48.132 LIB libspdk_event_fsdev.a 00:06:48.132 SO libspdk_event_sock.so.5.0 00:06:48.132 SO libspdk_event_vhost_blk.so.3.0 00:06:48.132 SO libspdk_event_vmd.so.6.0 00:06:48.132 SO libspdk_event_scheduler.so.4.0 00:06:48.132 SO libspdk_event_iobuf.so.3.0 00:06:48.132 SO libspdk_event_fsdev.so.1.0 00:06:48.132 LIB libspdk_event_keyring.a 00:06:48.132 SO libspdk_event_keyring.so.1.0 00:06:48.132 SYMLINK libspdk_event_vhost_blk.so 00:06:48.132 SYMLINK libspdk_event_sock.so 00:06:48.132 SYMLINK libspdk_event_scheduler.so 00:06:48.132 SYMLINK libspdk_event_vmd.so 00:06:48.132 SYMLINK libspdk_event_fsdev.so 00:06:48.132 SYMLINK libspdk_event_iobuf.so 00:06:48.132 SYMLINK libspdk_event_keyring.so 00:06:48.700 CC module/event/subsystems/accel/accel.o 00:06:48.700 LIB libspdk_event_accel.a 00:06:48.700 SO libspdk_event_accel.so.6.0 00:06:48.958 SYMLINK libspdk_event_accel.so 00:06:49.217 CC module/event/subsystems/bdev/bdev.o 00:06:49.476 LIB libspdk_event_bdev.a 00:06:49.476 SO libspdk_event_bdev.so.6.0 00:06:49.476 SYMLINK libspdk_event_bdev.so 00:06:49.735 CC module/event/subsystems/scsi/scsi.o 00:06:49.735 CC module/event/subsystems/nbd/nbd.o 00:06:49.735 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:49.735 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:49.735 CC module/event/subsystems/ublk/ublk.o 00:06:49.993 LIB libspdk_event_nbd.a 00:06:49.993 LIB libspdk_event_scsi.a 00:06:49.993 SO libspdk_event_nbd.so.6.0 00:06:49.993 LIB libspdk_event_ublk.a 00:06:49.993 SO libspdk_event_scsi.so.6.0 00:06:49.993 SO libspdk_event_ublk.so.3.0 00:06:49.994 LIB libspdk_event_nvmf.a 00:06:49.994 SYMLINK libspdk_event_nbd.so 00:06:49.994 SYMLINK libspdk_event_scsi.so 00:06:50.252 SYMLINK libspdk_event_ublk.so 00:06:50.252 SO libspdk_event_nvmf.so.6.0 00:06:50.252 SYMLINK libspdk_event_nvmf.so 00:06:50.252 CC module/event/subsystems/iscsi/iscsi.o 00:06:50.510 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:50.510 LIB libspdk_event_iscsi.a 00:06:50.510 SO libspdk_event_iscsi.so.6.0 00:06:50.510 LIB libspdk_event_vhost_scsi.a 00:06:50.769 SO libspdk_event_vhost_scsi.so.3.0 00:06:50.769 SYMLINK libspdk_event_iscsi.so 00:06:50.769 SYMLINK libspdk_event_vhost_scsi.so 00:06:51.028 SO libspdk.so.6.0 00:06:51.028 SYMLINK libspdk.so 00:06:51.287 CC app/trace_record/trace_record.o 00:06:51.287 TEST_HEADER include/spdk/accel.h 00:06:51.287 TEST_HEADER include/spdk/accel_module.h 00:06:51.287 CXX app/trace/trace.o 00:06:51.287 TEST_HEADER include/spdk/assert.h 00:06:51.287 TEST_HEADER include/spdk/barrier.h 00:06:51.287 
TEST_HEADER include/spdk/base64.h 00:06:51.287 TEST_HEADER include/spdk/bdev.h 00:06:51.287 TEST_HEADER include/spdk/bdev_module.h 00:06:51.287 TEST_HEADER include/spdk/bdev_zone.h 00:06:51.287 TEST_HEADER include/spdk/bit_array.h 00:06:51.287 TEST_HEADER include/spdk/bit_pool.h 00:06:51.287 TEST_HEADER include/spdk/blob_bdev.h 00:06:51.287 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:51.287 TEST_HEADER include/spdk/blobfs.h 00:06:51.287 TEST_HEADER include/spdk/blob.h 00:06:51.287 TEST_HEADER include/spdk/conf.h 00:06:51.287 TEST_HEADER include/spdk/config.h 00:06:51.287 TEST_HEADER include/spdk/cpuset.h 00:06:51.287 TEST_HEADER include/spdk/crc16.h 00:06:51.287 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:51.287 TEST_HEADER include/spdk/crc32.h 00:06:51.287 TEST_HEADER include/spdk/crc64.h 00:06:51.287 TEST_HEADER include/spdk/dif.h 00:06:51.287 CC app/nvmf_tgt/nvmf_main.o 00:06:51.287 TEST_HEADER include/spdk/dma.h 00:06:51.287 TEST_HEADER include/spdk/endian.h 00:06:51.287 TEST_HEADER include/spdk/env_dpdk.h 00:06:51.287 TEST_HEADER include/spdk/env.h 00:06:51.287 TEST_HEADER include/spdk/event.h 00:06:51.288 TEST_HEADER include/spdk/fd_group.h 00:06:51.288 TEST_HEADER include/spdk/fd.h 00:06:51.288 TEST_HEADER include/spdk/file.h 00:06:51.288 TEST_HEADER include/spdk/fsdev.h 00:06:51.288 TEST_HEADER include/spdk/fsdev_module.h 00:06:51.288 TEST_HEADER include/spdk/ftl.h 00:06:51.288 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:51.288 TEST_HEADER include/spdk/gpt_spec.h 00:06:51.288 CC examples/ioat/perf/perf.o 00:06:51.288 TEST_HEADER include/spdk/hexlify.h 00:06:51.288 TEST_HEADER include/spdk/histogram_data.h 00:06:51.288 TEST_HEADER include/spdk/idxd.h 00:06:51.288 CC test/thread/poller_perf/poller_perf.o 00:06:51.288 TEST_HEADER include/spdk/idxd_spec.h 00:06:51.288 TEST_HEADER include/spdk/init.h 00:06:51.288 TEST_HEADER include/spdk/ioat.h 00:06:51.288 TEST_HEADER include/spdk/ioat_spec.h 00:06:51.288 TEST_HEADER include/spdk/iscsi_spec.h 00:06:51.288 CC examples/util/zipf/zipf.o 00:06:51.288 TEST_HEADER include/spdk/json.h 00:06:51.288 TEST_HEADER include/spdk/jsonrpc.h 00:06:51.288 TEST_HEADER include/spdk/keyring.h 00:06:51.288 TEST_HEADER include/spdk/keyring_module.h 00:06:51.288 TEST_HEADER include/spdk/likely.h 00:06:51.288 TEST_HEADER include/spdk/log.h 00:06:51.288 TEST_HEADER include/spdk/lvol.h 00:06:51.288 TEST_HEADER include/spdk/md5.h 00:06:51.288 TEST_HEADER include/spdk/memory.h 00:06:51.288 TEST_HEADER include/spdk/mmio.h 00:06:51.288 TEST_HEADER include/spdk/nbd.h 00:06:51.288 TEST_HEADER include/spdk/net.h 00:06:51.288 TEST_HEADER include/spdk/notify.h 00:06:51.288 TEST_HEADER include/spdk/nvme.h 00:06:51.288 TEST_HEADER include/spdk/nvme_intel.h 00:06:51.288 CC test/app/bdev_svc/bdev_svc.o 00:06:51.288 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:51.288 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:51.288 TEST_HEADER include/spdk/nvme_spec.h 00:06:51.288 TEST_HEADER include/spdk/nvme_zns.h 00:06:51.288 CC test/dma/test_dma/test_dma.o 00:06:51.288 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:51.288 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:51.288 TEST_HEADER include/spdk/nvmf.h 00:06:51.288 TEST_HEADER include/spdk/nvmf_spec.h 00:06:51.288 TEST_HEADER include/spdk/nvmf_transport.h 00:06:51.288 TEST_HEADER include/spdk/opal.h 00:06:51.288 TEST_HEADER include/spdk/opal_spec.h 00:06:51.288 TEST_HEADER include/spdk/pci_ids.h 00:06:51.288 TEST_HEADER include/spdk/pipe.h 00:06:51.288 TEST_HEADER include/spdk/queue.h 00:06:51.288 TEST_HEADER 
include/spdk/reduce.h 00:06:51.288 TEST_HEADER include/spdk/rpc.h 00:06:51.288 TEST_HEADER include/spdk/scheduler.h 00:06:51.547 TEST_HEADER include/spdk/scsi.h 00:06:51.547 TEST_HEADER include/spdk/scsi_spec.h 00:06:51.547 TEST_HEADER include/spdk/sock.h 00:06:51.547 TEST_HEADER include/spdk/stdinc.h 00:06:51.547 TEST_HEADER include/spdk/string.h 00:06:51.547 TEST_HEADER include/spdk/thread.h 00:06:51.547 TEST_HEADER include/spdk/trace.h 00:06:51.547 TEST_HEADER include/spdk/trace_parser.h 00:06:51.547 TEST_HEADER include/spdk/tree.h 00:06:51.547 TEST_HEADER include/spdk/ublk.h 00:06:51.547 TEST_HEADER include/spdk/util.h 00:06:51.547 TEST_HEADER include/spdk/uuid.h 00:06:51.547 TEST_HEADER include/spdk/version.h 00:06:51.547 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:51.547 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:51.547 TEST_HEADER include/spdk/vhost.h 00:06:51.547 TEST_HEADER include/spdk/vmd.h 00:06:51.547 TEST_HEADER include/spdk/xor.h 00:06:51.547 TEST_HEADER include/spdk/zipf.h 00:06:51.547 CXX test/cpp_headers/accel.o 00:06:51.547 LINK interrupt_tgt 00:06:51.547 LINK spdk_trace_record 00:06:51.547 LINK poller_perf 00:06:51.547 LINK bdev_svc 00:06:51.547 LINK nvmf_tgt 00:06:51.547 LINK ioat_perf 00:06:51.547 LINK zipf 00:06:51.805 CXX test/cpp_headers/accel_module.o 00:06:51.805 CXX test/cpp_headers/assert.o 00:06:51.805 CC test/rpc_client/rpc_client_test.o 00:06:52.064 CC app/iscsi_tgt/iscsi_tgt.o 00:06:52.064 CC test/env/vtophys/vtophys.o 00:06:52.064 CC examples/ioat/verify/verify.o 00:06:52.064 LINK spdk_trace 00:06:52.064 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:52.064 LINK test_dma 00:06:52.064 CC app/spdk_tgt/spdk_tgt.o 00:06:52.064 CC test/env/mem_callbacks/mem_callbacks.o 00:06:52.064 CXX test/cpp_headers/barrier.o 00:06:52.064 LINK vtophys 00:06:52.064 LINK iscsi_tgt 00:06:52.323 LINK verify 00:06:52.323 CXX test/cpp_headers/base64.o 00:06:52.323 CXX test/cpp_headers/bdev.o 00:06:52.323 LINK spdk_tgt 00:06:52.323 LINK rpc_client_test 00:06:52.323 CXX test/cpp_headers/bdev_module.o 00:06:52.582 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:52.582 CC test/env/memory/memory_ut.o 00:06:52.582 CC test/env/pci/pci_ut.o 00:06:52.582 CC test/app/histogram_perf/histogram_perf.o 00:06:52.582 CC test/app/jsoncat/jsoncat.o 00:06:52.582 CXX test/cpp_headers/bdev_zone.o 00:06:52.582 CC app/spdk_lspci/spdk_lspci.o 00:06:52.582 LINK env_dpdk_post_init 00:06:52.582 LINK nvme_fuzz 00:06:52.840 LINK jsoncat 00:06:52.840 CC examples/thread/thread/thread_ex.o 00:06:52.840 LINK histogram_perf 00:06:52.840 LINK spdk_lspci 00:06:52.840 LINK mem_callbacks 00:06:52.840 CXX test/cpp_headers/bit_array.o 00:06:52.840 CXX test/cpp_headers/bit_pool.o 00:06:53.099 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:53.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:53.099 LINK thread 00:06:53.099 CC app/spdk_nvme_perf/perf.o 00:06:53.099 LINK pci_ut 00:06:53.099 CC examples/sock/hello_world/hello_sock.o 00:06:53.099 CXX test/cpp_headers/blob_bdev.o 00:06:53.099 CC app/spdk_nvme_identify/identify.o 00:06:53.099 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:53.358 CC examples/vmd/lsvmd/lsvmd.o 00:06:53.358 CXX test/cpp_headers/blobfs_bdev.o 00:06:53.358 LINK hello_sock 00:06:53.358 CC examples/vmd/led/led.o 00:06:53.358 CXX test/cpp_headers/blobfs.o 00:06:53.358 LINK lsvmd 00:06:53.618 CXX test/cpp_headers/blob.o 00:06:53.618 CC test/app/stub/stub.o 00:06:53.618 LINK led 00:06:53.618 CC app/spdk_nvme_discover/discovery_aer.o 00:06:53.618 LINK vhost_fuzz 00:06:53.907 CXX 
test/cpp_headers/conf.o 00:06:53.907 LINK memory_ut 00:06:53.907 CC test/event/event_perf/event_perf.o 00:06:53.907 LINK stub 00:06:53.907 LINK spdk_nvme_discover 00:06:53.907 CXX test/cpp_headers/config.o 00:06:53.907 LINK event_perf 00:06:54.187 CXX test/cpp_headers/cpuset.o 00:06:54.187 LINK spdk_nvme_perf 00:06:54.187 CC examples/idxd/perf/perf.o 00:06:54.187 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:54.187 CC app/spdk_top/spdk_top.o 00:06:54.187 CXX test/cpp_headers/crc16.o 00:06:54.187 LINK spdk_nvme_identify 00:06:54.187 CC test/nvme/aer/aer.o 00:06:54.187 CC test/event/reactor/reactor.o 00:06:54.187 CXX test/cpp_headers/crc32.o 00:06:54.447 CC app/vhost/vhost.o 00:06:54.447 LINK hello_fsdev 00:06:54.447 LINK reactor 00:06:54.447 CXX test/cpp_headers/crc64.o 00:06:54.447 LINK idxd_perf 00:06:54.447 CC examples/accel/perf/accel_perf.o 00:06:54.447 LINK aer 00:06:54.705 LINK vhost 00:06:54.705 CC examples/blob/hello_world/hello_blob.o 00:06:54.705 CXX test/cpp_headers/dif.o 00:06:54.705 CC test/event/reactor_perf/reactor_perf.o 00:06:54.705 CC test/nvme/reset/reset.o 00:06:54.705 CC examples/blob/cli/blobcli.o 00:06:54.705 CXX test/cpp_headers/dma.o 00:06:54.705 LINK reactor_perf 00:06:54.963 LINK hello_blob 00:06:54.963 CC examples/nvme/hello_world/hello_world.o 00:06:54.963 CC examples/nvme/reconnect/reconnect.o 00:06:54.963 CXX test/cpp_headers/endian.o 00:06:54.964 LINK reset 00:06:54.964 LINK accel_perf 00:06:54.964 CC test/event/app_repeat/app_repeat.o 00:06:55.227 CXX test/cpp_headers/env_dpdk.o 00:06:55.227 LINK iscsi_fuzz 00:06:55.227 LINK hello_world 00:06:55.227 LINK spdk_top 00:06:55.227 CC test/event/scheduler/scheduler.o 00:06:55.227 LINK app_repeat 00:06:55.227 LINK reconnect 00:06:55.227 CXX test/cpp_headers/env.o 00:06:55.227 CC test/nvme/sgl/sgl.o 00:06:55.227 CC test/nvme/e2edp/nvme_dp.o 00:06:55.227 LINK blobcli 00:06:55.486 LINK scheduler 00:06:55.486 CC app/spdk_dd/spdk_dd.o 00:06:55.486 CXX test/cpp_headers/event.o 00:06:55.486 CC app/fio/nvme/fio_plugin.o 00:06:55.486 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:55.486 CC test/accel/dif/dif.o 00:06:55.486 LINK sgl 00:06:55.745 LINK nvme_dp 00:06:55.745 CXX test/cpp_headers/fd_group.o 00:06:55.745 CC test/blobfs/mkfs/mkfs.o 00:06:55.745 CC examples/bdev/bdevperf/bdevperf.o 00:06:55.745 CC examples/bdev/hello_world/hello_bdev.o 00:06:55.745 CXX test/cpp_headers/fd.o 00:06:56.062 LINK spdk_dd 00:06:56.062 LINK mkfs 00:06:56.062 CC test/nvme/overhead/overhead.o 00:06:56.062 CC app/fio/bdev/fio_plugin.o 00:06:56.062 CXX test/cpp_headers/file.o 00:06:56.062 LINK hello_bdev 00:06:56.062 LINK nvme_manage 00:06:56.062 LINK spdk_nvme 00:06:56.062 CC examples/nvme/arbitration/arbitration.o 00:06:56.321 CXX test/cpp_headers/fsdev.o 00:06:56.321 LINK overhead 00:06:56.321 CXX test/cpp_headers/fsdev_module.o 00:06:56.321 CC test/nvme/err_injection/err_injection.o 00:06:56.321 CC test/lvol/esnap/esnap.o 00:06:56.321 LINK dif 00:06:56.321 CC test/nvme/startup/startup.o 00:06:56.321 CXX test/cpp_headers/ftl.o 00:06:56.579 CC test/nvme/reserve/reserve.o 00:06:56.579 CC test/nvme/simple_copy/simple_copy.o 00:06:56.579 LINK spdk_bdev 00:06:56.579 LINK arbitration 00:06:56.579 LINK err_injection 00:06:56.579 LINK startup 00:06:56.579 CXX test/cpp_headers/fuse_dispatcher.o 00:06:56.579 CC test/nvme/connect_stress/connect_stress.o 00:06:56.838 LINK reserve 00:06:56.838 CC test/nvme/boot_partition/boot_partition.o 00:06:56.838 LINK simple_copy 00:06:56.838 LINK bdevperf 00:06:56.838 CC examples/nvme/hotplug/hotplug.o 
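(Annotation: the long run of TEST_HEADER and CXX test/cpp_headers/*.o lines here is SPDK's header self-containment check: every public include/spdk/*.h is compiled as its own translation unit, so a header that forgets one of its own includes fails on the spot. A minimal re-creation of the idea, assuming a stock g++ and the in-tree include/ layout; this loop is a sketch, not the project's actual Makefile rule:

    for h in include/spdk/*.h; do
      # compile a one-line translation unit that includes only this header
      echo "#include <spdk/$(basename "$h")>" \
        | g++ -std=c++11 -Iinclude -x c++ -c - -o /dev/null \
        || echo "not self-contained: $h"
    done

Each CXX test/cpp_headers/<name>.o entry corresponds to one iteration of this pattern.)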
00:06:56.838 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:56.838 CC test/nvme/compliance/nvme_compliance.o 00:06:56.838 CXX test/cpp_headers/gpt_spec.o 00:06:56.838 LINK boot_partition 00:06:56.838 LINK connect_stress 00:06:57.097 CC test/nvme/fused_ordering/fused_ordering.o 00:06:57.097 CXX test/cpp_headers/hexlify.o 00:06:57.097 LINK cmb_copy 00:06:57.097 LINK hotplug 00:06:57.097 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:57.097 CC examples/nvme/abort/abort.o 00:06:57.097 CC test/nvme/fdp/fdp.o 00:06:57.097 CXX test/cpp_headers/histogram_data.o 00:06:57.356 LINK fused_ordering 00:06:57.356 LINK nvme_compliance 00:06:57.356 CC test/bdev/bdevio/bdevio.o 00:06:57.356 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:57.356 LINK doorbell_aers 00:06:57.356 CC test/nvme/cuse/cuse.o 00:06:57.356 CXX test/cpp_headers/idxd.o 00:06:57.356 CXX test/cpp_headers/idxd_spec.o 00:06:57.356 CXX test/cpp_headers/init.o 00:06:57.615 CXX test/cpp_headers/ioat.o 00:06:57.615 LINK pmr_persistence 00:06:57.615 CXX test/cpp_headers/ioat_spec.o 00:06:57.615 CXX test/cpp_headers/iscsi_spec.o 00:06:57.615 LINK fdp 00:06:57.615 LINK abort 00:06:57.615 CXX test/cpp_headers/json.o 00:06:57.615 CXX test/cpp_headers/jsonrpc.o 00:06:57.615 CXX test/cpp_headers/keyring.o 00:06:57.615 LINK bdevio 00:06:57.615 CXX test/cpp_headers/keyring_module.o 00:06:57.877 CXX test/cpp_headers/likely.o 00:06:57.877 CXX test/cpp_headers/log.o 00:06:57.877 CXX test/cpp_headers/lvol.o 00:06:57.877 CXX test/cpp_headers/md5.o 00:06:57.877 CXX test/cpp_headers/memory.o 00:06:57.877 CXX test/cpp_headers/mmio.o 00:06:57.877 CXX test/cpp_headers/nbd.o 00:06:57.877 CXX test/cpp_headers/net.o 00:06:57.877 CXX test/cpp_headers/notify.o 00:06:57.877 CXX test/cpp_headers/nvme.o 00:06:58.145 CXX test/cpp_headers/nvme_intel.o 00:06:58.145 CC examples/nvmf/nvmf/nvmf.o 00:06:58.145 CXX test/cpp_headers/nvme_ocssd.o 00:06:58.145 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:58.145 CXX test/cpp_headers/nvme_spec.o 00:06:58.145 CXX test/cpp_headers/nvme_zns.o 00:06:58.145 CXX test/cpp_headers/nvmf_cmd.o 00:06:58.145 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:58.145 CXX test/cpp_headers/nvmf.o 00:06:58.145 CXX test/cpp_headers/nvmf_spec.o 00:06:58.145 CXX test/cpp_headers/nvmf_transport.o 00:06:58.415 CXX test/cpp_headers/opal.o 00:06:58.415 LINK nvmf 00:06:58.415 CXX test/cpp_headers/opal_spec.o 00:06:58.415 CXX test/cpp_headers/pci_ids.o 00:06:58.415 CXX test/cpp_headers/pipe.o 00:06:58.415 CXX test/cpp_headers/queue.o 00:06:58.415 CXX test/cpp_headers/reduce.o 00:06:58.415 CXX test/cpp_headers/rpc.o 00:06:58.415 CXX test/cpp_headers/scheduler.o 00:06:58.415 CXX test/cpp_headers/scsi.o 00:06:58.415 CXX test/cpp_headers/scsi_spec.o 00:06:58.415 CXX test/cpp_headers/sock.o 00:06:58.415 CXX test/cpp_headers/stdinc.o 00:06:58.675 CXX test/cpp_headers/string.o 00:06:58.675 CXX test/cpp_headers/thread.o 00:06:58.675 CXX test/cpp_headers/trace.o 00:06:58.675 CXX test/cpp_headers/trace_parser.o 00:06:58.675 CXX test/cpp_headers/tree.o 00:06:58.675 CXX test/cpp_headers/ublk.o 00:06:58.675 CXX test/cpp_headers/util.o 00:06:58.675 CXX test/cpp_headers/uuid.o 00:06:58.675 CXX test/cpp_headers/version.o 00:06:58.675 CXX test/cpp_headers/vfio_user_pci.o 00:06:58.675 CXX test/cpp_headers/vfio_user_spec.o 00:06:58.675 CXX test/cpp_headers/vhost.o 00:06:58.675 LINK cuse 00:06:58.675 CXX test/cpp_headers/vmd.o 00:06:58.934 CXX test/cpp_headers/xor.o 00:06:58.934 CXX test/cpp_headers/zipf.o 00:07:02.227 LINK esnap 00:07:02.794 00:07:02.794 real 1m29.036s 
00:07:02.794 user 7m48.794s 00:07:02.794 sys 1m53.694s 00:07:02.794 10:12:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:02.794 ************************************ 00:07:02.794 END TEST make 00:07:02.794 ************************************ 00:07:02.794 10:12:09 make -- common/autotest_common.sh@10 -- $ set +x 00:07:02.794 10:12:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:02.794 10:12:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:02.794 10:12:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:02.794 10:12:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:02.794 10:12:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:02.794 10:12:09 -- pm/common@44 -- $ pid=5286 00:07:02.794 10:12:09 -- pm/common@50 -- $ kill -TERM 5286 00:07:02.794 10:12:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:02.794 10:12:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:02.794 10:12:09 -- pm/common@44 -- $ pid=5288 00:07:02.794 10:12:09 -- pm/common@50 -- $ kill -TERM 5288 00:07:02.794 10:12:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:02.794 10:12:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:02.794 10:12:09 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.794 10:12:09 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.794 10:12:09 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.794 10:12:09 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.794 10:12:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.795 10:12:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.795 10:12:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.795 10:12:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.795 10:12:09 -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.795 10:12:09 -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.795 10:12:09 -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.795 10:12:09 -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.795 10:12:09 -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.795 10:12:09 -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.795 10:12:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.795 10:12:09 -- scripts/common.sh@344 -- # case "$op" in 00:07:02.795 10:12:09 -- scripts/common.sh@345 -- # : 1 00:07:02.795 10:12:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.795 10:12:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.795 10:12:09 -- scripts/common.sh@365 -- # decimal 1 00:07:02.795 10:12:09 -- scripts/common.sh@353 -- # local d=1 00:07:02.795 10:12:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.795 10:12:09 -- scripts/common.sh@355 -- # echo 1 00:07:02.795 10:12:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.795 10:12:09 -- scripts/common.sh@366 -- # decimal 2 00:07:02.795 10:12:09 -- scripts/common.sh@353 -- # local d=2 00:07:02.795 10:12:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.795 10:12:09 -- scripts/common.sh@355 -- # echo 2 00:07:02.795 10:12:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.795 10:12:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.795 10:12:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.795 10:12:09 -- scripts/common.sh@368 -- # return 0 00:07:02.795 10:12:09 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.795 10:12:09 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.795 --rc genhtml_branch_coverage=1 00:07:02.795 --rc genhtml_function_coverage=1 00:07:02.795 --rc genhtml_legend=1 00:07:02.795 --rc geninfo_all_blocks=1 00:07:02.795 --rc geninfo_unexecuted_blocks=1 00:07:02.795 00:07:02.795 ' 00:07:02.795 10:12:09 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.795 --rc genhtml_branch_coverage=1 00:07:02.795 --rc genhtml_function_coverage=1 00:07:02.795 --rc genhtml_legend=1 00:07:02.795 --rc geninfo_all_blocks=1 00:07:02.795 --rc geninfo_unexecuted_blocks=1 00:07:02.795 00:07:02.795 ' 00:07:02.795 10:12:09 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.795 --rc genhtml_branch_coverage=1 00:07:02.795 --rc genhtml_function_coverage=1 00:07:02.795 --rc genhtml_legend=1 00:07:02.795 --rc geninfo_all_blocks=1 00:07:02.795 --rc geninfo_unexecuted_blocks=1 00:07:02.795 00:07:02.795 ' 00:07:02.795 10:12:09 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.795 --rc genhtml_branch_coverage=1 00:07:02.795 --rc genhtml_function_coverage=1 00:07:02.795 --rc genhtml_legend=1 00:07:02.795 --rc geninfo_all_blocks=1 00:07:02.795 --rc geninfo_unexecuted_blocks=1 00:07:02.795 00:07:02.795 ' 00:07:02.795 10:12:09 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.795 10:12:09 -- nvmf/common.sh@7 -- # uname -s 00:07:02.795 10:12:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.795 10:12:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.795 10:12:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.795 10:12:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.795 10:12:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.795 10:12:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.795 10:12:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.795 10:12:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.795 10:12:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.795 10:12:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.795 10:12:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f61114a-0326-46e8-aeb1-5f899d706120 00:07:02.795 
10:12:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=2f61114a-0326-46e8-aeb1-5f899d706120 00:07:02.795 10:12:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.795 10:12:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.795 10:12:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.795 10:12:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.795 10:12:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.795 10:12:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.795 10:12:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.795 10:12:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.795 10:12:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.795 10:12:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.795 10:12:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.795 10:12:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.795 10:12:09 -- paths/export.sh@5 -- # export PATH 00:07:02.795 10:12:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.795 10:12:09 -- nvmf/common.sh@51 -- # : 0 00:07:02.795 10:12:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:02.795 10:12:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.054 10:12:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.054 10:12:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.054 10:12:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.054 10:12:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.054 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.054 10:12:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.054 10:12:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.054 10:12:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.054 10:12:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:03.054 10:12:09 -- spdk/autotest.sh@32 -- # uname -s 00:07:03.054 10:12:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:03.054 10:12:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:03.054 10:12:09 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:03.054 10:12:09 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:03.055 10:12:09 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:03.055 10:12:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:03.055 10:12:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:03.055 10:12:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:03.055 10:12:09 -- spdk/autotest.sh@48 -- # udevadm_pid=54794 00:07:03.055 10:12:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:03.055 10:12:09 -- pm/common@17 -- # local monitor 00:07:03.055 10:12:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:03.055 10:12:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:03.055 10:12:09 -- pm/common@21 -- # date +%s 00:07:03.055 10:12:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:03.055 10:12:09 -- pm/common@25 -- # sleep 1 00:07:03.055 10:12:09 -- pm/common@21 -- # date +%s 00:07:03.055 10:12:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732529529 00:07:03.055 10:12:09 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732529529 00:07:03.055 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732529529_collect-vmstat.pm.log 00:07:03.055 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732529529_collect-cpu-load.pm.log 00:07:03.993 10:12:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:03.993 10:12:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:03.993 10:12:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.993 10:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 10:12:11 -- spdk/autotest.sh@59 -- # create_test_list 00:07:03.993 10:12:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:03.993 10:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.993 10:12:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:03.993 10:12:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:03.993 10:12:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:03.993 10:12:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:03.993 10:12:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:03.993 10:12:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:03.993 10:12:11 -- common/autotest_common.sh@1457 -- # uname 00:07:03.993 10:12:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:03.993 10:12:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:03.993 10:12:11 -- common/autotest_common.sh@1477 -- # uname 00:07:03.993 10:12:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:03.993 10:12:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:03.993 10:12:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:04.253 lcov: LCOV version 1.15 00:07:04.253 10:12:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:19.158 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:19.158 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:37.245 10:12:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:37.245 10:12:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.245 10:12:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.245 10:12:41 -- spdk/autotest.sh@78 -- # rm -f 00:07:37.245 10:12:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:37.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.245 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:37.245 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:37.245 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:37.245 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:37.245 10:12:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:37.245 10:12:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:37.245 10:12:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:37.245 10:12:43 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:37.245 10:12:43 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:37.245 10:12:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:37.245 10:12:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:37.245 10:12:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:37.245 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.245 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:37.245 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:37.245 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:37.245 No valid GPT data, bailing 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.245 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.245 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:37.245 1+0 records in 00:07:37.245 1+0 records out 00:07:37.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175774 s, 59.7 MB/s 00:07:37.245 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.245 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:37.245 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:37.245 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:37.245 No valid GPT data, bailing 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.245 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.245 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:37.245 1+0 records in 00:07:37.245 1+0 records out 00:07:37.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430849 s, 243 MB/s 00:07:37.245 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.245 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:37.245 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:37.245 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:37.245 No valid GPT data, bailing 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.245 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.245 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:37.245 1+0 
records in 00:07:37.245 1+0 records out 00:07:37.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601949 s, 174 MB/s 00:07:37.245 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.245 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:37.245 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:37.245 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:37.245 No valid GPT data, bailing 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.245 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.245 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:37.245 1+0 records in 00:07:37.245 1+0 records out 00:07:37.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597919 s, 175 MB/s 00:07:37.245 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.245 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.245 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:37.245 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:37.245 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:37.245 No valid GPT data, bailing 00:07:37.245 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:37.246 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.246 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.246 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:37.246 1+0 records in 00:07:37.246 1+0 records out 00:07:37.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435294 s, 241 MB/s 00:07:37.246 10:12:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:37.246 10:12:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:37.246 10:12:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:37.246 10:12:43 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:37.246 10:12:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:37.246 No valid GPT data, bailing 00:07:37.246 10:12:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:37.246 10:12:43 -- scripts/common.sh@394 -- # pt= 00:07:37.246 10:12:43 -- scripts/common.sh@395 -- # return 1 00:07:37.246 10:12:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:37.246 1+0 records in 00:07:37.246 1+0 records out 00:07:37.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583465 s, 180 MB/s 00:07:37.246 10:12:43 -- spdk/autotest.sh@105 -- # sync 00:07:37.246 10:12:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:37.246 10:12:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:37.246 10:12:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:39.779 10:12:46 -- spdk/autotest.sh@111 -- # uname -s 00:07:39.779 10:12:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:39.779 10:12:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:39.779 10:12:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:40.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.972 
Hugepages
00:07:40.972 node hugesize free / total
00:07:40.972 node0 1048576kB 0 / 0
00:07:40.972 node0 2048kB 0 / 0
00:07:40.972
00:07:40.972 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:41.231 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:07:41.231 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:07:41.489 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:07:41.489 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:07:41.489 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:07:41.489 10:12:48 -- spdk/autotest.sh@117 -- # uname -s
00:07:41.747 10:12:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:07:41.747 10:12:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:07:41.747 10:12:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:42.314 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:43.248 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:07:43.248 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:07:43.248 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:07:43.248 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:07:43.248 10:12:50 -- common/autotest_common.sh@1517 -- # sleep 1
00:07:44.623 10:12:51 -- common/autotest_common.sh@1518 -- # bdfs=()
00:07:44.623 10:12:51 -- common/autotest_common.sh@1518 -- # local bdfs
00:07:44.623 10:12:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:07:44.623 10:12:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:07:44.623 10:12:51 -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:44.623 10:12:51 -- common/autotest_common.sh@1498 -- # local bdfs
00:07:44.623 10:12:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:44.623 10:12:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:44.623 10:12:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:44.623 10:12:51 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:44.623 10:12:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:44.623 10:12:51 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:07:44.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:45.139 Waiting for block devices as requested
00:07:45.412 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:07:45.412 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:07:45.675 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:07:45.675 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:07:50.944 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:07:50.944 10:12:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:07:50.944 10:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:07:50.944 10:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:07:50.944 10:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:07:50.944 10:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:07:50.945 10:12:57 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:50.945 10:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:50.945 10:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1543 -- # continue 00:07:50.945 10:12:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:50.945 10:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1543 -- # continue 00:07:50.945 10:12:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:50.945 10:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1543 -- # continue 00:07:50.945 10:12:57 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:50.945 10:12:57 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:50.945 10:12:57 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:50.945 10:12:57 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:50.945 10:12:57 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:50.945 10:12:57 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:07:50.945 10:12:57 -- common/autotest_common.sh@1543 -- # continue 00:07:50.945 10:12:57 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:50.945 10:12:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.945 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.945 10:12:57 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:50.945 10:12:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.945 10:12:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.945 10:12:57 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:51.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:52.079 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.337 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.337 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.337 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:52.337 10:12:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:52.337 10:12:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.337 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:52.598 10:12:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:52.598 10:12:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:52.598 10:12:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:52.598 10:12:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:52.598 10:12:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:52.598 10:12:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:52.598 10:12:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:52.598 10:12:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:52.598 10:12:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:52.598 10:12:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:52.598 10:12:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:52.598 10:12:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.598 10:12:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:52.598 10:12:59 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:52.598 10:12:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:52.598 10:12:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:52.598 10:12:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:52.598 10:12:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:52.598 10:12:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:52.598 10:12:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:52.598 10:12:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
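(Annotation: nvme_namespace_revert and opal_revert_cleanup both walk the same per-controller pattern in this trace: map each PCI address to its /dev/nvmeX node through sysfs, read the OACS word with nvme-cli to see whether namespace management is offered (bit 3, mask 0x8; the 0x12a above has that bit set, since 0x12a & 0x8 = 8), confirm there is no unallocated NVM capacity, and treat a device as an opal-revert candidate only when its PCI device ID is 0x0a54, a specific Intel datacenter NVMe part; every controller here reports 0x0010, so nothing is reverted. Condensed into a standalone sketch whose paths and parsing mirror the trace; it is illustrative, not the repository's exact code:

    bdf=0000:00:10.0
    # resolve the controller that lives under this PCI address
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$path")
    # OACS bit 3 (0x8) = namespace management supported
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( (oacs & 0x8) != 0 )) && echo "$ctrlr: namespace management OK"
    # unallocated NVM capacity must be zero before namespaces are touched
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity"
    # opal revert is only attempted on 0x0a54 devices
    [[ $(cat /sys/bus/pci/devices/$bdf/device) == 0x0a54 ]] && echo "$bdf: opal revert candidate"

The trace entries above, from readlink/grep through the \0\x\0\a\5\4 comparison, are this sequence expanded by xtrace, once per controller.)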
00:07:52.598 10:12:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:52.598 10:12:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:52.598 10:12:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:52.598 10:12:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:52.598 10:12:59 -- common/autotest_common.sh@1572 -- # return 0 00:07:52.598 10:12:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:52.598 10:12:59 -- common/autotest_common.sh@1580 -- # return 0 00:07:52.598 10:12:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:52.598 10:12:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:52.598 10:12:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:52.598 10:12:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:52.598 10:12:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:52.598 10:12:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.598 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:52.598 10:12:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:52.598 10:12:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:52.598 10:12:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.598 10:12:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.598 10:12:59 -- common/autotest_common.sh@10 -- # set +x 00:07:52.598 ************************************ 00:07:52.598 START TEST env 00:07:52.598 ************************************ 00:07:52.598 10:12:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:52.858 * Looking for test storage... 00:07:52.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:52.858 10:12:59 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.858 10:12:59 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.858 10:12:59 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.858 10:12:59 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.858 10:12:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.858 10:12:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.858 10:12:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.858 10:12:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.858 10:12:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.858 10:12:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.858 10:12:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.858 10:12:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.858 10:12:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.858 10:12:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.858 10:12:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.858 10:12:59 env -- scripts/common.sh@344 -- # case "$op" in 00:07:52.858 10:12:59 env -- scripts/common.sh@345 -- # : 1 00:07:52.858 10:12:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.859 10:12:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.859 10:12:59 env -- scripts/common.sh@365 -- # decimal 1 00:07:52.859 10:12:59 env -- scripts/common.sh@353 -- # local d=1 00:07:52.859 10:12:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.859 10:12:59 env -- scripts/common.sh@355 -- # echo 1 00:07:52.859 10:12:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.859 10:12:59 env -- scripts/common.sh@366 -- # decimal 2 00:07:52.859 10:12:59 env -- scripts/common.sh@353 -- # local d=2 00:07:52.859 10:12:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.859 10:12:59 env -- scripts/common.sh@355 -- # echo 2 00:07:52.859 10:12:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.859 10:12:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.859 10:12:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.859 10:12:59 env -- scripts/common.sh@368 -- # return 0 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.859 --rc genhtml_branch_coverage=1 00:07:52.859 --rc genhtml_function_coverage=1 00:07:52.859 --rc genhtml_legend=1 00:07:52.859 --rc geninfo_all_blocks=1 00:07:52.859 --rc geninfo_unexecuted_blocks=1 00:07:52.859 00:07:52.859 ' 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.859 --rc genhtml_branch_coverage=1 00:07:52.859 --rc genhtml_function_coverage=1 00:07:52.859 --rc genhtml_legend=1 00:07:52.859 --rc geninfo_all_blocks=1 00:07:52.859 --rc geninfo_unexecuted_blocks=1 00:07:52.859 00:07:52.859 ' 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.859 --rc genhtml_branch_coverage=1 00:07:52.859 --rc genhtml_function_coverage=1 00:07:52.859 --rc genhtml_legend=1 00:07:52.859 --rc geninfo_all_blocks=1 00:07:52.859 --rc geninfo_unexecuted_blocks=1 00:07:52.859 00:07:52.859 ' 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.859 --rc genhtml_branch_coverage=1 00:07:52.859 --rc genhtml_function_coverage=1 00:07:52.859 --rc genhtml_legend=1 00:07:52.859 --rc geninfo_all_blocks=1 00:07:52.859 --rc geninfo_unexecuted_blocks=1 00:07:52.859 00:07:52.859 ' 00:07:52.859 10:12:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.859 10:12:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.859 10:12:59 env -- common/autotest_common.sh@10 -- # set +x 00:07:52.859 ************************************ 00:07:52.859 START TEST env_memory 00:07:52.859 ************************************ 00:07:52.859 10:12:59 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:52.859 00:07:52.859 00:07:52.859 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.859 http://cunit.sourceforge.net/ 00:07:52.859 00:07:52.859 00:07:52.859 Suite: memory 00:07:52.859 Test: alloc and free memory map ...[2024-11-25 10:12:59.944320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:53.119 passed 00:07:53.119 Test: mem map translation ...[2024-11-25 10:12:59.989788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:53.119 [2024-11-25 10:12:59.990012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:53.119 [2024-11-25 10:12:59.990183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:53.119 [2024-11-25 10:12:59.990211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:53.119 passed 00:07:53.119 Test: mem map registration ...[2024-11-25 10:13:00.064011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:53.119 [2024-11-25 10:13:00.064090] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:53.119 passed 00:07:53.119 Test: mem map adjacent registrations ...passed 00:07:53.119 00:07:53.119 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.119 suites 1 1 n/a 0 0 00:07:53.119 tests 4 4 4 0 0 00:07:53.119 asserts 152 152 152 0 n/a 00:07:53.119 00:07:53.119 Elapsed time = 0.253 seconds 00:07:53.119 00:07:53.119 real 0m0.303s 00:07:53.119 user 0m0.261s 00:07:53.119 sys 0m0.029s 00:07:53.119 10:13:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.119 ************************************ 00:07:53.119 END TEST env_memory 00:07:53.119 ************************************ 00:07:53.119 10:13:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:53.378 10:13:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:53.378 10:13:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.378 10:13:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.378 10:13:00 env -- common/autotest_common.sh@10 -- # set +x 00:07:53.378 ************************************ 00:07:53.378 START TEST env_vtophys 00:07:53.378 ************************************ 00:07:53.378 10:13:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:53.378 EAL: lib.eal log level changed from notice to debug 00:07:53.378 EAL: Detected lcore 0 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 1 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 2 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 3 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 4 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 5 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 6 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 7 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 8 as core 0 on socket 0 00:07:53.378 EAL: Detected lcore 9 as core 0 on socket 0 00:07:53.378 EAL: Maximum logical cores by configuration: 128 00:07:53.378 EAL: Detected CPU lcores: 10 00:07:53.378 EAL: Detected NUMA nodes: 1 00:07:53.378 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:53.378 EAL: Detected shared linkage of DPDK 00:07:53.378 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:53.378 EAL: Selected IOVA mode 'PA' 00:07:53.378 EAL: Probing VFIO support... 00:07:53.378 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:53.378 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:53.378 EAL: Ask a virtual area of 0x2e000 bytes 00:07:53.378 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:53.378 EAL: Setting up physically contiguous memory... 00:07:53.378 EAL: Setting maximum number of open files to 524288 00:07:53.378 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:53.378 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:53.378 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.378 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:53.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.378 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.378 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:53.378 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:53.378 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.378 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:53.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.378 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.378 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:53.378 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:53.378 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.378 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:53.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.378 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.378 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:53.378 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:53.378 EAL: Ask a virtual area of 0x61000 bytes 00:07:53.378 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:53.378 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:53.378 EAL: Ask a virtual area of 0x400000000 bytes 00:07:53.378 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:53.378 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:53.378 EAL: Hugepages will be freed exactly as allocated. 00:07:53.378 EAL: No shared files mode enabled, IPC is disabled 00:07:53.378 EAL: No shared files mode enabled, IPC is disabled 00:07:53.378 EAL: TSC frequency is ~2490000 KHz 00:07:53.378 EAL: Main lcore 0 is ready (tid=7f8af2f40a40;cpuset=[0]) 00:07:53.378 EAL: Trying to obtain current memory policy. 00:07:53.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.378 EAL: Restoring previous memory policy: 0 00:07:53.378 EAL: request: mp_malloc_sync 00:07:53.378 EAL: No shared files mode enabled, IPC is disabled 00:07:53.378 EAL: Heap on socket 0 was expanded by 2MB 00:07:53.378 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:53.378 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:53.378 EAL: Mem event callback 'spdk:(nil)' registered 00:07:53.378 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:53.378 00:07:53.378 00:07:53.378 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.378 http://cunit.sourceforge.net/ 00:07:53.378 00:07:53.378 00:07:53.378 Suite: components_suite 00:07:53.946 Test: vtophys_malloc_test ...passed 00:07:53.946 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:53.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.946 EAL: Restoring previous memory policy: 4 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was expanded by 4MB 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was shrunk by 4MB 00:07:53.946 EAL: Trying to obtain current memory policy. 00:07:53.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.946 EAL: Restoring previous memory policy: 4 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was expanded by 6MB 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was shrunk by 6MB 00:07:53.946 EAL: Trying to obtain current memory policy. 00:07:53.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.946 EAL: Restoring previous memory policy: 4 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was expanded by 10MB 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was shrunk by 10MB 00:07:53.946 EAL: Trying to obtain current memory policy. 00:07:53.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:53.946 EAL: Restoring previous memory policy: 4 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was expanded by 18MB 00:07:53.946 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.946 EAL: request: mp_malloc_sync 00:07:53.946 EAL: No shared files mode enabled, IPC is disabled 00:07:53.946 EAL: Heap on socket 0 was shrunk by 18MB 00:07:54.205 EAL: Trying to obtain current memory policy. 00:07:54.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.205 EAL: Restoring previous memory policy: 4 00:07:54.205 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.205 EAL: request: mp_malloc_sync 00:07:54.205 EAL: No shared files mode enabled, IPC is disabled 00:07:54.205 EAL: Heap on socket 0 was expanded by 34MB 00:07:54.205 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.205 EAL: request: mp_malloc_sync 00:07:54.205 EAL: No shared files mode enabled, IPC is disabled 00:07:54.205 EAL: Heap on socket 0 was shrunk by 34MB 00:07:54.205 EAL: Trying to obtain current memory policy. 
00:07:54.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.205 EAL: Restoring previous memory policy: 4 00:07:54.205 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.205 EAL: request: mp_malloc_sync 00:07:54.205 EAL: No shared files mode enabled, IPC is disabled 00:07:54.205 EAL: Heap on socket 0 was expanded by 66MB 00:07:54.464 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.464 EAL: request: mp_malloc_sync 00:07:54.464 EAL: No shared files mode enabled, IPC is disabled 00:07:54.464 EAL: Heap on socket 0 was shrunk by 66MB 00:07:54.464 EAL: Trying to obtain current memory policy. 00:07:54.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.464 EAL: Restoring previous memory policy: 4 00:07:54.464 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.464 EAL: request: mp_malloc_sync 00:07:54.464 EAL: No shared files mode enabled, IPC is disabled 00:07:54.464 EAL: Heap on socket 0 was expanded by 130MB 00:07:54.723 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.723 EAL: request: mp_malloc_sync 00:07:54.723 EAL: No shared files mode enabled, IPC is disabled 00:07:54.723 EAL: Heap on socket 0 was shrunk by 130MB 00:07:54.982 EAL: Trying to obtain current memory policy. 00:07:54.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:54.982 EAL: Restoring previous memory policy: 4 00:07:54.982 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.982 EAL: request: mp_malloc_sync 00:07:54.982 EAL: No shared files mode enabled, IPC is disabled 00:07:54.982 EAL: Heap on socket 0 was expanded by 258MB 00:07:55.551 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.551 EAL: request: mp_malloc_sync 00:07:55.551 EAL: No shared files mode enabled, IPC is disabled 00:07:55.551 EAL: Heap on socket 0 was shrunk by 258MB 00:07:56.120 EAL: Trying to obtain current memory policy. 00:07:56.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:56.120 EAL: Restoring previous memory policy: 4 00:07:56.120 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.120 EAL: request: mp_malloc_sync 00:07:56.120 EAL: No shared files mode enabled, IPC is disabled 00:07:56.120 EAL: Heap on socket 0 was expanded by 514MB 00:07:57.056 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.056 EAL: request: mp_malloc_sync 00:07:57.056 EAL: No shared files mode enabled, IPC is disabled 00:07:57.056 EAL: Heap on socket 0 was shrunk by 514MB 00:07:57.992 EAL: Trying to obtain current memory policy. 
00:07:57.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:57.992 EAL: Restoring previous memory policy: 4 00:07:57.992 EAL: Calling mem event callback 'spdk:(nil)' 00:07:57.992 EAL: request: mp_malloc_sync 00:07:57.992 EAL: No shared files mode enabled, IPC is disabled 00:07:57.992 EAL: Heap on socket 0 was expanded by 1026MB 00:08:00.043 EAL: Calling mem event callback 'spdk:(nil)' 00:08:00.043 EAL: request: mp_malloc_sync 00:08:00.043 EAL: No shared files mode enabled, IPC is disabled 00:08:00.043 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:01.947 passed 00:08:01.947 00:08:01.947 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.947 suites 1 1 n/a 0 0 00:08:01.947 tests 2 2 2 0 0 00:08:01.947 asserts 5649 5649 5649 0 n/a 00:08:01.947 00:08:01.947 Elapsed time = 8.250 seconds 00:08:01.947 EAL: Calling mem event callback 'spdk:(nil)' 00:08:01.947 EAL: request: mp_malloc_sync 00:08:01.947 EAL: No shared files mode enabled, IPC is disabled 00:08:01.947 EAL: Heap on socket 0 was shrunk by 2MB 00:08:01.947 EAL: No shared files mode enabled, IPC is disabled 00:08:01.947 EAL: No shared files mode enabled, IPC is disabled 00:08:01.947 EAL: No shared files mode enabled, IPC is disabled 00:08:01.947 00:08:01.947 real 0m8.600s 00:08:01.947 user 0m7.508s 00:08:01.947 sys 0m0.928s 00:08:01.947 10:13:08 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.947 ************************************ 00:08:01.947 END TEST env_vtophys 00:08:01.947 ************************************ 00:08:01.947 10:13:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:01.947 10:13:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:01.947 10:13:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.947 10:13:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.947 10:13:08 env -- common/autotest_common.sh@10 -- # set +x 00:08:01.947 ************************************ 00:08:01.947 START TEST env_pci 00:08:01.947 ************************************ 00:08:01.947 10:13:08 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:01.947 00:08:01.947 00:08:01.948 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.948 http://cunit.sourceforge.net/ 00:08:01.948 00:08:01.948 00:08:01.948 Suite: pci 00:08:01.948 Test: pci_hook ...[2024-11-25 10:13:08.957230] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57635 has claimed it 00:08:01.948 passed 00:08:01.948 00:08:01.948 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.948 suites 1 1 n/a 0 0 00:08:01.948 tests 1 1 1 0 0 00:08:01.948 asserts 25 25 25 0 n/a 00:08:01.948 00:08:01.948 Elapsed time = 0.007 seconds 00:08:01.948 EAL: Cannot find device (10000:00:01.0) 00:08:01.948 EAL: Failed to attach device on primary process 00:08:01.948 00:08:01.948 real 0m0.108s 00:08:01.948 user 0m0.046s 00:08:01.948 sys 0m0.061s 00:08:01.948 10:13:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.948 ************************************ 00:08:01.948 END TEST env_pci 00:08:01.948 ************************************ 00:08:01.948 10:13:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:02.215 10:13:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:02.216 10:13:09 env -- env/env.sh@15 -- # uname 00:08:02.216 10:13:09 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:02.216 10:13:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:02.216 10:13:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:02.216 10:13:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.216 10:13:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.216 10:13:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:02.216 ************************************ 00:08:02.216 START TEST env_dpdk_post_init 00:08:02.216 ************************************ 00:08:02.216 10:13:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:02.216 EAL: Detected CPU lcores: 10 00:08:02.216 EAL: Detected NUMA nodes: 1 00:08:02.216 EAL: Detected shared linkage of DPDK 00:08:02.216 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:02.216 EAL: Selected IOVA mode 'PA' 00:08:02.216 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:02.482 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:02.482 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:02.482 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:08:02.482 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:08:02.482 Starting DPDK initialization... 00:08:02.482 Starting SPDK post initialization... 00:08:02.482 SPDK NVMe probe 00:08:02.482 Attaching to 0000:00:10.0 00:08:02.482 Attaching to 0000:00:11.0 00:08:02.482 Attaching to 0000:00:12.0 00:08:02.482 Attaching to 0000:00:13.0 00:08:02.482 Attached to 0000:00:10.0 00:08:02.482 Attached to 0000:00:11.0 00:08:02.482 Attached to 0000:00:13.0 00:08:02.482 Attached to 0000:00:12.0 00:08:02.482 Cleaning up... 
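[annotation] The env.sh@14..22 steps above show how the suite builds the DPDK argument string before launching env_dpdk_post_init: a one-core mask plus, on Linux, a pinned virtual address base. A condensed sketch under those assumptions (run_test is the timing/banner wrapper from autotest_common.sh; the testdir value is illustrative):

  testdir=/home/vagrant/spdk_repo/spdk/test/env

  argv='-c 0x1 '                       # single-core mask for the test app
  if [ "$(uname)" = Linux ]; then
      # Pin DPDK's virtual address base so mappings land at a fixed address.
      argv+=--base-virtaddr=0x200000000000
  fi
  # $argv is intentionally unquoted so it word-splits into separate flags.
  run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv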
00:08:02.482 ************************************ 00:08:02.482 END TEST env_dpdk_post_init 00:08:02.482 ************************************ 00:08:02.482 00:08:02.482 real 0m0.313s 00:08:02.482 user 0m0.105s 00:08:02.482 sys 0m0.110s 00:08:02.482 10:13:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.482 10:13:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 10:13:09 env -- env/env.sh@26 -- # uname 00:08:02.482 10:13:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:02.482 10:13:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:02.482 10:13:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.482 10:13:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.482 10:13:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:02.482 ************************************ 00:08:02.482 START TEST env_mem_callbacks 00:08:02.482 ************************************ 00:08:02.482 10:13:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:02.482 EAL: Detected CPU lcores: 10 00:08:02.482 EAL: Detected NUMA nodes: 1 00:08:02.482 EAL: Detected shared linkage of DPDK 00:08:02.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:02.482 EAL: Selected IOVA mode 'PA' 00:08:02.742 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:02.742 00:08:02.742 00:08:02.742 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.742 http://cunit.sourceforge.net/ 00:08:02.742 00:08:02.742 00:08:02.742 Suite: memory 00:08:02.742 Test: test ... 00:08:02.742 register 0x200000200000 2097152 00:08:02.742 malloc 3145728 00:08:02.742 register 0x200000400000 4194304 00:08:02.742 buf 0x2000004fffc0 len 3145728 PASSED 00:08:02.742 malloc 64 00:08:02.742 buf 0x2000004ffec0 len 64 PASSED 00:08:02.742 malloc 4194304 00:08:02.742 register 0x200000800000 6291456 00:08:02.742 buf 0x2000009fffc0 len 4194304 PASSED 00:08:02.742 free 0x2000004fffc0 3145728 00:08:02.742 free 0x2000004ffec0 64 00:08:02.742 unregister 0x200000400000 4194304 PASSED 00:08:02.742 free 0x2000009fffc0 4194304 00:08:02.742 unregister 0x200000800000 6291456 PASSED 00:08:02.742 malloc 8388608 00:08:02.742 register 0x200000400000 10485760 00:08:02.742 buf 0x2000005fffc0 len 8388608 PASSED 00:08:02.742 free 0x2000005fffc0 8388608 00:08:02.742 unregister 0x200000400000 10485760 PASSED 00:08:02.742 passed 00:08:02.742 00:08:02.742 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.742 suites 1 1 n/a 0 0 00:08:02.742 tests 1 1 1 0 0 00:08:02.742 asserts 15 15 15 0 n/a 00:08:02.742 00:08:02.742 Elapsed time = 0.083 seconds 00:08:02.742 00:08:02.742 real 0m0.298s 00:08:02.742 user 0m0.115s 00:08:02.742 sys 0m0.078s 00:08:02.742 10:13:09 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.742 10:13:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:02.742 ************************************ 00:08:02.742 END TEST env_mem_callbacks 00:08:02.742 ************************************ 00:08:02.742 00:08:02.742 real 0m10.189s 00:08:02.742 user 0m8.260s 00:08:02.742 sys 0m1.559s 00:08:02.742 10:13:09 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.742 10:13:09 env -- common/autotest_common.sh@10 -- # set +x 00:08:02.742 ************************************ 00:08:02.742 END TEST env 00:08:02.742 
************************************ 00:08:03.002 10:13:09 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:03.002 10:13:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.002 10:13:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.002 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:08:03.002 ************************************ 00:08:03.002 START TEST rpc 00:08:03.002 ************************************ 00:08:03.002 10:13:09 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:03.002 * Looking for test storage... 00:08:03.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:03.002 10:13:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.002 10:13:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.002 10:13:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.002 10:13:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.002 10:13:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.002 10:13:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.002 10:13:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.002 10:13:10 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.002 10:13:10 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.002 10:13:10 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.002 10:13:10 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.002 10:13:10 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.002 10:13:10 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.002 10:13:10 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:03.002 10:13:10 rpc -- scripts/common.sh@345 -- # : 1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.002 10:13:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.002 10:13:10 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@353 -- # local d=1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.002 10:13:10 rpc -- scripts/common.sh@355 -- # echo 1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.002 10:13:10 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:03.269 10:13:10 rpc -- scripts/common.sh@353 -- # local d=2 00:08:03.269 10:13:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.269 10:13:10 rpc -- scripts/common.sh@355 -- # echo 2 00:08:03.270 10:13:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.270 10:13:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.270 10:13:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.270 10:13:10 rpc -- scripts/common.sh@368 -- # return 0 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.270 --rc genhtml_branch_coverage=1 00:08:03.270 --rc genhtml_function_coverage=1 00:08:03.270 --rc genhtml_legend=1 00:08:03.270 --rc geninfo_all_blocks=1 00:08:03.270 --rc geninfo_unexecuted_blocks=1 00:08:03.270 00:08:03.270 ' 00:08:03.270 10:13:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57762 00:08:03.270 10:13:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:03.270 10:13:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.270 10:13:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57762 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 57762 ']' 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
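[annotation] The cmp_versions trace above (here deciding whether the installed lcov predates 2.x) is a plain component-wise compare: split both versions on '.', '-' and ':', then walk the longer of the two arrays. A simplified sketch of the scripts/common.sh logic as traced; the real helper also routes each field through decimal() to normalize non-numeric parts:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-:                      # split fields on '.', '-' and ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
      for ((v = 0; v < len; v++)); do
          # Missing components count as 0, so 1.15 compares like 1.15.0.
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == *'='* ]]                 # all equal: only <=, >=, == succeed
  }

  lt 1.15 2 && echo "lcov older than 2.x"   # true, matching the trace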
00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.270 10:13:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.270 [2024-11-25 10:13:10.231431] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:03.270 [2024-11-25 10:13:10.231802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57762 ] 00:08:03.540 [2024-11-25 10:13:10.416289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.540 [2024-11-25 10:13:10.535716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:03.540 [2024-11-25 10:13:10.535783] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57762' to capture a snapshot of events at runtime. 00:08:03.540 [2024-11-25 10:13:10.535797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.540 [2024-11-25 10:13:10.535811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.540 [2024-11-25 10:13:10.535822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57762 for offline analysis/debug. 00:08:03.540 [2024-11-25 10:13:10.537175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.480 10:13:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.480 10:13:11 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:04.480 10:13:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:04.480 10:13:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:04.480 10:13:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:04.480 10:13:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:04.480 10:13:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.480 10:13:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.480 10:13:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.480 ************************************ 00:08:04.480 START TEST rpc_integrity 00:08:04.480 ************************************ 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.480 10:13:11 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.480 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.480 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:04.480 { 00:08:04.480 "name": "Malloc0", 00:08:04.480 "aliases": [ 00:08:04.480 "1957cb7c-3881-4e31-8187-60a7012c73f0" 00:08:04.480 ], 00:08:04.480 "product_name": "Malloc disk", 00:08:04.480 "block_size": 512, 00:08:04.480 "num_blocks": 16384, 00:08:04.480 "uuid": "1957cb7c-3881-4e31-8187-60a7012c73f0", 00:08:04.480 "assigned_rate_limits": { 00:08:04.480 "rw_ios_per_sec": 0, 00:08:04.480 "rw_mbytes_per_sec": 0, 00:08:04.480 "r_mbytes_per_sec": 0, 00:08:04.480 "w_mbytes_per_sec": 0 00:08:04.480 }, 00:08:04.480 "claimed": false, 00:08:04.480 "zoned": false, 00:08:04.480 "supported_io_types": { 00:08:04.480 "read": true, 00:08:04.480 "write": true, 00:08:04.480 "unmap": true, 00:08:04.480 "flush": true, 00:08:04.480 "reset": true, 00:08:04.480 "nvme_admin": false, 00:08:04.480 "nvme_io": false, 00:08:04.481 "nvme_io_md": false, 00:08:04.481 "write_zeroes": true, 00:08:04.481 "zcopy": true, 00:08:04.481 "get_zone_info": false, 00:08:04.481 "zone_management": false, 00:08:04.481 "zone_append": false, 00:08:04.481 "compare": false, 00:08:04.481 "compare_and_write": false, 00:08:04.481 "abort": true, 00:08:04.481 "seek_hole": false, 00:08:04.481 "seek_data": false, 00:08:04.481 "copy": true, 00:08:04.481 "nvme_iov_md": false 00:08:04.481 }, 00:08:04.481 "memory_domains": [ 00:08:04.481 { 00:08:04.481 "dma_device_id": "system", 00:08:04.481 "dma_device_type": 1 00:08:04.481 }, 00:08:04.481 { 00:08:04.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.481 "dma_device_type": 2 00:08:04.481 } 00:08:04.481 ], 00:08:04.481 "driver_specific": {} 00:08:04.481 } 00:08:04.481 ]' 00:08:04.481 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 [2024-11-25 10:13:11.602687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:04.741 [2024-11-25 10:13:11.602909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.741 [2024-11-25 10:13:11.602980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.741 [2024-11-25 10:13:11.603064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.741 [2024-11-25 10:13:11.605724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.741 [2024-11-25 10:13:11.605774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:04.741 Passthru0 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.741 
10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:04.741 { 00:08:04.741 "name": "Malloc0", 00:08:04.741 "aliases": [ 00:08:04.741 "1957cb7c-3881-4e31-8187-60a7012c73f0" 00:08:04.741 ], 00:08:04.741 "product_name": "Malloc disk", 00:08:04.741 "block_size": 512, 00:08:04.741 "num_blocks": 16384, 00:08:04.741 "uuid": "1957cb7c-3881-4e31-8187-60a7012c73f0", 00:08:04.741 "assigned_rate_limits": { 00:08:04.741 "rw_ios_per_sec": 0, 00:08:04.741 "rw_mbytes_per_sec": 0, 00:08:04.741 "r_mbytes_per_sec": 0, 00:08:04.741 "w_mbytes_per_sec": 0 00:08:04.741 }, 00:08:04.741 "claimed": true, 00:08:04.741 "claim_type": "exclusive_write", 00:08:04.741 "zoned": false, 00:08:04.741 "supported_io_types": { 00:08:04.741 "read": true, 00:08:04.741 "write": true, 00:08:04.741 "unmap": true, 00:08:04.741 "flush": true, 00:08:04.741 "reset": true, 00:08:04.741 "nvme_admin": false, 00:08:04.741 "nvme_io": false, 00:08:04.741 "nvme_io_md": false, 00:08:04.741 "write_zeroes": true, 00:08:04.741 "zcopy": true, 00:08:04.741 "get_zone_info": false, 00:08:04.741 "zone_management": false, 00:08:04.741 "zone_append": false, 00:08:04.741 "compare": false, 00:08:04.741 "compare_and_write": false, 00:08:04.741 "abort": true, 00:08:04.741 "seek_hole": false, 00:08:04.741 "seek_data": false, 00:08:04.741 "copy": true, 00:08:04.741 "nvme_iov_md": false 00:08:04.741 }, 00:08:04.741 "memory_domains": [ 00:08:04.741 { 00:08:04.741 "dma_device_id": "system", 00:08:04.741 "dma_device_type": 1 00:08:04.741 }, 00:08:04.741 { 00:08:04.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.741 "dma_device_type": 2 00:08:04.741 } 00:08:04.741 ], 00:08:04.741 "driver_specific": {} 00:08:04.741 }, 00:08:04.741 { 00:08:04.741 "name": "Passthru0", 00:08:04.741 "aliases": [ 00:08:04.741 "5d78b797-7242-53a0-af0d-a851d6cd4900" 00:08:04.741 ], 00:08:04.741 "product_name": "passthru", 00:08:04.741 "block_size": 512, 00:08:04.741 "num_blocks": 16384, 00:08:04.741 "uuid": "5d78b797-7242-53a0-af0d-a851d6cd4900", 00:08:04.741 "assigned_rate_limits": { 00:08:04.741 "rw_ios_per_sec": 0, 00:08:04.741 "rw_mbytes_per_sec": 0, 00:08:04.741 "r_mbytes_per_sec": 0, 00:08:04.741 "w_mbytes_per_sec": 0 00:08:04.741 }, 00:08:04.741 "claimed": false, 00:08:04.741 "zoned": false, 00:08:04.741 "supported_io_types": { 00:08:04.741 "read": true, 00:08:04.741 "write": true, 00:08:04.741 "unmap": true, 00:08:04.741 "flush": true, 00:08:04.741 "reset": true, 00:08:04.741 "nvme_admin": false, 00:08:04.741 "nvme_io": false, 00:08:04.741 "nvme_io_md": false, 00:08:04.741 "write_zeroes": true, 00:08:04.741 "zcopy": true, 00:08:04.741 "get_zone_info": false, 00:08:04.741 "zone_management": false, 00:08:04.741 "zone_append": false, 00:08:04.741 "compare": false, 00:08:04.741 "compare_and_write": false, 00:08:04.741 "abort": true, 00:08:04.741 "seek_hole": false, 00:08:04.741 "seek_data": false, 00:08:04.741 "copy": true, 00:08:04.741 "nvme_iov_md": false 00:08:04.741 }, 00:08:04.741 "memory_domains": [ 00:08:04.741 { 00:08:04.741 "dma_device_id": "system", 00:08:04.741 "dma_device_type": 1 00:08:04.741 }, 00:08:04.741 { 00:08:04.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.741 "dma_device_type": 2 
00:08:04.741 } 00:08:04.741 ], 00:08:04.741 "driver_specific": { 00:08:04.741 "passthru": { 00:08:04.741 "name": "Passthru0", 00:08:04.741 "base_bdev_name": "Malloc0" 00:08:04.741 } 00:08:04.741 } 00:08:04.741 } 00:08:04.741 ]' 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:04.741 ************************************ 00:08:04.741 END TEST rpc_integrity 00:08:04.741 ************************************ 00:08:04.741 10:13:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:04.741 00:08:04.741 real 0m0.354s 00:08:04.741 user 0m0.187s 00:08:04.741 sys 0m0.059s 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.741 10:13:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:04.741 10:13:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:04.741 10:13:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.741 10:13:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.741 10:13:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.000 ************************************ 00:08:05.000 START TEST rpc_plugins 00:08:05.000 ************************************ 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:05.001 { 00:08:05.001 "name": "Malloc1", 00:08:05.001 "aliases": 
[ 00:08:05.001 "8403bf10-a161-46c8-9dcd-95f679d4c320" 00:08:05.001 ], 00:08:05.001 "product_name": "Malloc disk", 00:08:05.001 "block_size": 4096, 00:08:05.001 "num_blocks": 256, 00:08:05.001 "uuid": "8403bf10-a161-46c8-9dcd-95f679d4c320", 00:08:05.001 "assigned_rate_limits": { 00:08:05.001 "rw_ios_per_sec": 0, 00:08:05.001 "rw_mbytes_per_sec": 0, 00:08:05.001 "r_mbytes_per_sec": 0, 00:08:05.001 "w_mbytes_per_sec": 0 00:08:05.001 }, 00:08:05.001 "claimed": false, 00:08:05.001 "zoned": false, 00:08:05.001 "supported_io_types": { 00:08:05.001 "read": true, 00:08:05.001 "write": true, 00:08:05.001 "unmap": true, 00:08:05.001 "flush": true, 00:08:05.001 "reset": true, 00:08:05.001 "nvme_admin": false, 00:08:05.001 "nvme_io": false, 00:08:05.001 "nvme_io_md": false, 00:08:05.001 "write_zeroes": true, 00:08:05.001 "zcopy": true, 00:08:05.001 "get_zone_info": false, 00:08:05.001 "zone_management": false, 00:08:05.001 "zone_append": false, 00:08:05.001 "compare": false, 00:08:05.001 "compare_and_write": false, 00:08:05.001 "abort": true, 00:08:05.001 "seek_hole": false, 00:08:05.001 "seek_data": false, 00:08:05.001 "copy": true, 00:08:05.001 "nvme_iov_md": false 00:08:05.001 }, 00:08:05.001 "memory_domains": [ 00:08:05.001 { 00:08:05.001 "dma_device_id": "system", 00:08:05.001 "dma_device_type": 1 00:08:05.001 }, 00:08:05.001 { 00:08:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.001 "dma_device_type": 2 00:08:05.001 } 00:08:05.001 ], 00:08:05.001 "driver_specific": {} 00:08:05.001 } 00:08:05.001 ]' 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.001 10:13:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.001 10:13:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.001 10:13:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.001 10:13:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:05.001 10:13:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:05.001 ************************************ 00:08:05.001 END TEST rpc_plugins 00:08:05.001 ************************************ 00:08:05.001 10:13:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:05.001 00:08:05.001 real 0m0.190s 00:08:05.001 user 0m0.108s 00:08:05.001 sys 0m0.034s 00:08:05.001 10:13:12 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.001 10:13:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:05.001 10:13:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:05.001 10:13:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.001 10:13:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.001 10:13:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.259 ************************************ 00:08:05.259 START TEST rpc_trace_cmd_test 00:08:05.259 ************************************ 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.259 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:05.259 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57762", 00:08:05.259 "tpoint_group_mask": "0x8", 00:08:05.259 "iscsi_conn": { 00:08:05.259 "mask": "0x2", 00:08:05.259 "tpoint_mask": "0x0" 00:08:05.259 }, 00:08:05.259 "scsi": { 00:08:05.259 "mask": "0x4", 00:08:05.259 "tpoint_mask": "0x0" 00:08:05.259 }, 00:08:05.260 "bdev": { 00:08:05.260 "mask": "0x8", 00:08:05.260 "tpoint_mask": "0xffffffffffffffff" 00:08:05.260 }, 00:08:05.260 "nvmf_rdma": { 00:08:05.260 "mask": "0x10", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "nvmf_tcp": { 00:08:05.260 "mask": "0x20", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "ftl": { 00:08:05.260 "mask": "0x40", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "blobfs": { 00:08:05.260 "mask": "0x80", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "dsa": { 00:08:05.260 "mask": "0x200", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "thread": { 00:08:05.260 "mask": "0x400", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "nvme_pcie": { 00:08:05.260 "mask": "0x800", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "iaa": { 00:08:05.260 "mask": "0x1000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "nvme_tcp": { 00:08:05.260 "mask": "0x2000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "bdev_nvme": { 00:08:05.260 "mask": "0x4000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "sock": { 00:08:05.260 "mask": "0x8000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "blob": { 00:08:05.260 "mask": "0x10000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "bdev_raid": { 00:08:05.260 "mask": "0x20000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 }, 00:08:05.260 "scheduler": { 00:08:05.260 "mask": "0x40000", 00:08:05.260 "tpoint_mask": "0x0" 00:08:05.260 } 00:08:05.260 }' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:05.260 ************************************ 00:08:05.260 END TEST rpc_trace_cmd_test 00:08:05.260 ************************************ 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:05.260 00:08:05.260 real 0m0.237s 
00:08:05.260 user 0m0.188s 00:08:05.260 sys 0m0.039s 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.260 10:13:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 10:13:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:05.520 10:13:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:05.520 10:13:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:05.520 10:13:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.520 10:13:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.520 10:13:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 ************************************ 00:08:05.520 START TEST rpc_daemon_integrity 00:08:05.520 ************************************ 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:05.520 { 00:08:05.520 "name": "Malloc2", 00:08:05.520 "aliases": [ 00:08:05.520 "311edf78-3582-4b06-9c55-0aa2d684fa49" 00:08:05.520 ], 00:08:05.520 "product_name": "Malloc disk", 00:08:05.520 "block_size": 512, 00:08:05.520 "num_blocks": 16384, 00:08:05.520 "uuid": "311edf78-3582-4b06-9c55-0aa2d684fa49", 00:08:05.520 "assigned_rate_limits": { 00:08:05.520 "rw_ios_per_sec": 0, 00:08:05.520 "rw_mbytes_per_sec": 0, 00:08:05.520 "r_mbytes_per_sec": 0, 00:08:05.520 "w_mbytes_per_sec": 0 00:08:05.520 }, 00:08:05.520 "claimed": false, 00:08:05.520 "zoned": false, 00:08:05.520 "supported_io_types": { 00:08:05.520 "read": true, 00:08:05.520 "write": true, 00:08:05.520 "unmap": true, 00:08:05.520 "flush": true, 00:08:05.520 "reset": true, 00:08:05.520 "nvme_admin": false, 00:08:05.520 "nvme_io": false, 00:08:05.520 "nvme_io_md": false, 00:08:05.520 "write_zeroes": true, 00:08:05.520 "zcopy": true, 00:08:05.520 "get_zone_info": false, 00:08:05.520 "zone_management": false, 00:08:05.520 "zone_append": false, 00:08:05.520 "compare": false, 00:08:05.520 
"compare_and_write": false, 00:08:05.520 "abort": true, 00:08:05.520 "seek_hole": false, 00:08:05.520 "seek_data": false, 00:08:05.520 "copy": true, 00:08:05.520 "nvme_iov_md": false 00:08:05.520 }, 00:08:05.520 "memory_domains": [ 00:08:05.520 { 00:08:05.520 "dma_device_id": "system", 00:08:05.520 "dma_device_type": 1 00:08:05.520 }, 00:08:05.520 { 00:08:05.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.520 "dma_device_type": 2 00:08:05.520 } 00:08:05.520 ], 00:08:05.520 "driver_specific": {} 00:08:05.520 } 00:08:05.520 ]' 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 [2024-11-25 10:13:12.580838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:05.520 [2024-11-25 10:13:12.581171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.520 [2024-11-25 10:13:12.581215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:05.520 [2024-11-25 10:13:12.581232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.520 [2024-11-25 10:13:12.584360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.520 [2024-11-25 10:13:12.584408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:05.520 Passthru0 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.520 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:05.520 { 00:08:05.520 "name": "Malloc2", 00:08:05.521 "aliases": [ 00:08:05.521 "311edf78-3582-4b06-9c55-0aa2d684fa49" 00:08:05.521 ], 00:08:05.521 "product_name": "Malloc disk", 00:08:05.521 "block_size": 512, 00:08:05.521 "num_blocks": 16384, 00:08:05.521 "uuid": "311edf78-3582-4b06-9c55-0aa2d684fa49", 00:08:05.521 "assigned_rate_limits": { 00:08:05.521 "rw_ios_per_sec": 0, 00:08:05.521 "rw_mbytes_per_sec": 0, 00:08:05.521 "r_mbytes_per_sec": 0, 00:08:05.521 "w_mbytes_per_sec": 0 00:08:05.521 }, 00:08:05.521 "claimed": true, 00:08:05.521 "claim_type": "exclusive_write", 00:08:05.521 "zoned": false, 00:08:05.521 "supported_io_types": { 00:08:05.521 "read": true, 00:08:05.521 "write": true, 00:08:05.521 "unmap": true, 00:08:05.521 "flush": true, 00:08:05.521 "reset": true, 00:08:05.521 "nvme_admin": false, 00:08:05.521 "nvme_io": false, 00:08:05.521 "nvme_io_md": false, 00:08:05.521 "write_zeroes": true, 00:08:05.521 "zcopy": true, 00:08:05.521 "get_zone_info": false, 00:08:05.521 "zone_management": false, 00:08:05.521 "zone_append": false, 00:08:05.521 "compare": false, 00:08:05.521 "compare_and_write": false, 00:08:05.521 "abort": true, 00:08:05.521 "seek_hole": false, 00:08:05.521 "seek_data": false, 
00:08:05.521 "copy": true, 00:08:05.521 "nvme_iov_md": false 00:08:05.521 }, 00:08:05.521 "memory_domains": [ 00:08:05.521 { 00:08:05.521 "dma_device_id": "system", 00:08:05.521 "dma_device_type": 1 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.521 "dma_device_type": 2 00:08:05.521 } 00:08:05.521 ], 00:08:05.521 "driver_specific": {} 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "name": "Passthru0", 00:08:05.521 "aliases": [ 00:08:05.521 "847f3f87-7fac-50d2-b5ad-54b72b4f03e7" 00:08:05.521 ], 00:08:05.521 "product_name": "passthru", 00:08:05.521 "block_size": 512, 00:08:05.521 "num_blocks": 16384, 00:08:05.521 "uuid": "847f3f87-7fac-50d2-b5ad-54b72b4f03e7", 00:08:05.521 "assigned_rate_limits": { 00:08:05.521 "rw_ios_per_sec": 0, 00:08:05.521 "rw_mbytes_per_sec": 0, 00:08:05.521 "r_mbytes_per_sec": 0, 00:08:05.521 "w_mbytes_per_sec": 0 00:08:05.521 }, 00:08:05.521 "claimed": false, 00:08:05.521 "zoned": false, 00:08:05.521 "supported_io_types": { 00:08:05.521 "read": true, 00:08:05.521 "write": true, 00:08:05.521 "unmap": true, 00:08:05.521 "flush": true, 00:08:05.521 "reset": true, 00:08:05.521 "nvme_admin": false, 00:08:05.521 "nvme_io": false, 00:08:05.521 "nvme_io_md": false, 00:08:05.521 "write_zeroes": true, 00:08:05.521 "zcopy": true, 00:08:05.521 "get_zone_info": false, 00:08:05.521 "zone_management": false, 00:08:05.521 "zone_append": false, 00:08:05.521 "compare": false, 00:08:05.521 "compare_and_write": false, 00:08:05.521 "abort": true, 00:08:05.521 "seek_hole": false, 00:08:05.521 "seek_data": false, 00:08:05.521 "copy": true, 00:08:05.521 "nvme_iov_md": false 00:08:05.521 }, 00:08:05.521 "memory_domains": [ 00:08:05.521 { 00:08:05.521 "dma_device_id": "system", 00:08:05.521 "dma_device_type": 1 00:08:05.521 }, 00:08:05.521 { 00:08:05.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.521 "dma_device_type": 2 00:08:05.521 } 00:08:05.521 ], 00:08:05.521 "driver_specific": { 00:08:05.521 "passthru": { 00:08:05.521 "name": "Passthru0", 00:08:05.521 "base_bdev_name": "Malloc2" 00:08:05.521 } 00:08:05.521 } 00:08:05.521 } 00:08:05.521 ]' 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:05.794 ************************************ 00:08:05.794 END TEST rpc_daemon_integrity 00:08:05.794 ************************************ 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:05.794 00:08:05.794 real 0m0.377s 00:08:05.794 user 0m0.202s 00:08:05.794 sys 0m0.065s 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.794 10:13:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:05.794 10:13:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:05.794 10:13:12 rpc -- rpc/rpc.sh@84 -- # killprocess 57762 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 57762 ']' 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@958 -- # kill -0 57762 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@959 -- # uname 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57762 00:08:05.794 killing process with pid 57762 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57762' 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@973 -- # kill 57762 00:08:05.794 10:13:12 rpc -- common/autotest_common.sh@978 -- # wait 57762 00:08:09.084 00:08:09.084 real 0m5.601s 00:08:09.084 user 0m6.091s 00:08:09.084 sys 0m1.017s 00:08:09.084 10:13:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.084 10:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.084 ************************************ 00:08:09.084 END TEST rpc 00:08:09.084 ************************************ 00:08:09.084 10:13:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:09.084 10:13:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.084 10:13:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.084 10:13:15 -- common/autotest_common.sh@10 -- # set +x 00:08:09.084 ************************************ 00:08:09.084 START TEST skip_rpc 00:08:09.084 ************************************ 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:09.084 * Looking for test storage... 
00:08:09.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.084 10:13:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.084 --rc genhtml_branch_coverage=1 00:08:09.084 --rc genhtml_function_coverage=1 00:08:09.084 --rc genhtml_legend=1 00:08:09.084 --rc geninfo_all_blocks=1 00:08:09.084 --rc geninfo_unexecuted_blocks=1 00:08:09.084 00:08:09.084 ' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.084 --rc genhtml_branch_coverage=1 00:08:09.084 --rc genhtml_function_coverage=1 00:08:09.084 --rc genhtml_legend=1 00:08:09.084 --rc geninfo_all_blocks=1 00:08:09.084 --rc geninfo_unexecuted_blocks=1 00:08:09.084 00:08:09.084 ' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:09.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.084 --rc genhtml_branch_coverage=1 00:08:09.084 --rc genhtml_function_coverage=1 00:08:09.084 --rc genhtml_legend=1 00:08:09.084 --rc geninfo_all_blocks=1 00:08:09.084 --rc geninfo_unexecuted_blocks=1 00:08:09.084 00:08:09.084 ' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.084 --rc genhtml_branch_coverage=1 00:08:09.084 --rc genhtml_function_coverage=1 00:08:09.084 --rc genhtml_legend=1 00:08:09.084 --rc geninfo_all_blocks=1 00:08:09.084 --rc geninfo_unexecuted_blocks=1 00:08:09.084 00:08:09.084 ' 00:08:09.084 10:13:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:09.084 10:13:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:09.084 10:13:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.084 10:13:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.084 ************************************ 00:08:09.084 START TEST skip_rpc 00:08:09.084 ************************************ 00:08:09.084 10:13:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:09.084 10:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57996 00:08:09.084 10:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:09.084 10:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.084 10:13:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:09.084 [2024-11-25 10:13:15.924665] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:08:09.084 [2024-11-25 10:13:15.924797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:08:09.084 [2024-11-25 10:13:16.107725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.343 [2024-11-25 10:13:16.223641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57996 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57996 ']' 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57996 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57996 00:08:14.652 killing process with pid 57996 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57996' 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57996 00:08:14.652 10:13:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57996 00:08:16.556 00:08:16.556 real 0m7.615s 00:08:16.556 user 0m7.019s 00:08:16.556 sys 0m0.515s 00:08:16.556 ************************************ 00:08:16.556 END TEST skip_rpc 00:08:16.556 ************************************ 00:08:16.556 10:13:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.556 10:13:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:08:16.556 10:13:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:16.556 10:13:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.556 10:13:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.556 10:13:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.556 ************************************ 00:08:16.556 START TEST skip_rpc_with_json 00:08:16.556 ************************************ 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58106 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58106 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58106 ']' 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.556 10:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:16.556 [2024-11-25 10:13:23.604282] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:08:16.556 [2024-11-25 10:13:23.604412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:08:16.816 [2024-11-25 10:13:23.781557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.816 [2024-11-25 10:13:23.893925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.755 [2024-11-25 10:13:24.807427] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:17.755 request: 00:08:17.755 { 00:08:17.755 "trtype": "tcp", 00:08:17.755 "method": "nvmf_get_transports", 00:08:17.755 "req_id": 1 00:08:17.755 } 00:08:17.755 Got JSON-RPC error response 00:08:17.755 response: 00:08:17.755 { 00:08:17.755 "code": -19, 00:08:17.755 "message": "No such device" 00:08:17.755 } 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:17.755 [2024-11-25 10:13:24.823569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.755 10:13:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:18.120 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.120 10:13:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:18.120 { 00:08:18.120 "subsystems": [ 00:08:18.120 { 00:08:18.120 "subsystem": "fsdev", 00:08:18.120 "config": [ 00:08:18.120 { 00:08:18.120 "method": "fsdev_set_opts", 00:08:18.120 "params": { 00:08:18.120 "fsdev_io_pool_size": 65535, 00:08:18.120 "fsdev_io_cache_size": 256 00:08:18.120 } 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "keyring", 00:08:18.120 "config": [] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "iobuf", 00:08:18.120 "config": [ 00:08:18.120 { 00:08:18.120 "method": "iobuf_set_options", 00:08:18.120 "params": { 00:08:18.120 "small_pool_count": 8192, 00:08:18.120 "large_pool_count": 1024, 00:08:18.120 "small_bufsize": 8192, 00:08:18.120 "large_bufsize": 135168, 00:08:18.120 "enable_numa": false 00:08:18.120 } 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "sock", 00:08:18.120 "config": [ 00:08:18.120 { 
00:08:18.120 "method": "sock_set_default_impl", 00:08:18.120 "params": { 00:08:18.120 "impl_name": "posix" 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "sock_impl_set_options", 00:08:18.120 "params": { 00:08:18.120 "impl_name": "ssl", 00:08:18.120 "recv_buf_size": 4096, 00:08:18.120 "send_buf_size": 4096, 00:08:18.120 "enable_recv_pipe": true, 00:08:18.120 "enable_quickack": false, 00:08:18.120 "enable_placement_id": 0, 00:08:18.120 "enable_zerocopy_send_server": true, 00:08:18.120 "enable_zerocopy_send_client": false, 00:08:18.120 "zerocopy_threshold": 0, 00:08:18.120 "tls_version": 0, 00:08:18.120 "enable_ktls": false 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "sock_impl_set_options", 00:08:18.120 "params": { 00:08:18.120 "impl_name": "posix", 00:08:18.120 "recv_buf_size": 2097152, 00:08:18.120 "send_buf_size": 2097152, 00:08:18.120 "enable_recv_pipe": true, 00:08:18.120 "enable_quickack": false, 00:08:18.120 "enable_placement_id": 0, 00:08:18.120 "enable_zerocopy_send_server": true, 00:08:18.120 "enable_zerocopy_send_client": false, 00:08:18.120 "zerocopy_threshold": 0, 00:08:18.120 "tls_version": 0, 00:08:18.120 "enable_ktls": false 00:08:18.120 } 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "vmd", 00:08:18.120 "config": [] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "accel", 00:08:18.120 "config": [ 00:08:18.120 { 00:08:18.120 "method": "accel_set_options", 00:08:18.120 "params": { 00:08:18.120 "small_cache_size": 128, 00:08:18.120 "large_cache_size": 16, 00:08:18.120 "task_count": 2048, 00:08:18.120 "sequence_count": 2048, 00:08:18.120 "buf_count": 2048 00:08:18.120 } 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "bdev", 00:08:18.120 "config": [ 00:08:18.120 { 00:08:18.120 "method": "bdev_set_options", 00:08:18.120 "params": { 00:08:18.120 "bdev_io_pool_size": 65535, 00:08:18.120 "bdev_io_cache_size": 256, 00:08:18.120 "bdev_auto_examine": true, 00:08:18.120 "iobuf_small_cache_size": 128, 00:08:18.120 "iobuf_large_cache_size": 16 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "bdev_raid_set_options", 00:08:18.120 "params": { 00:08:18.120 "process_window_size_kb": 1024, 00:08:18.120 "process_max_bandwidth_mb_sec": 0 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "bdev_iscsi_set_options", 00:08:18.120 "params": { 00:08:18.120 "timeout_sec": 30 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "bdev_nvme_set_options", 00:08:18.120 "params": { 00:08:18.120 "action_on_timeout": "none", 00:08:18.120 "timeout_us": 0, 00:08:18.120 "timeout_admin_us": 0, 00:08:18.120 "keep_alive_timeout_ms": 10000, 00:08:18.120 "arbitration_burst": 0, 00:08:18.120 "low_priority_weight": 0, 00:08:18.120 "medium_priority_weight": 0, 00:08:18.120 "high_priority_weight": 0, 00:08:18.120 "nvme_adminq_poll_period_us": 10000, 00:08:18.120 "nvme_ioq_poll_period_us": 0, 00:08:18.120 "io_queue_requests": 0, 00:08:18.120 "delay_cmd_submit": true, 00:08:18.120 "transport_retry_count": 4, 00:08:18.120 "bdev_retry_count": 3, 00:08:18.120 "transport_ack_timeout": 0, 00:08:18.120 "ctrlr_loss_timeout_sec": 0, 00:08:18.120 "reconnect_delay_sec": 0, 00:08:18.120 "fast_io_fail_timeout_sec": 0, 00:08:18.120 "disable_auto_failback": false, 00:08:18.120 "generate_uuids": false, 00:08:18.120 "transport_tos": 0, 00:08:18.120 "nvme_error_stat": false, 00:08:18.120 "rdma_srq_size": 0, 00:08:18.120 "io_path_stat": false, 
00:08:18.120 "allow_accel_sequence": false, 00:08:18.120 "rdma_max_cq_size": 0, 00:08:18.120 "rdma_cm_event_timeout_ms": 0, 00:08:18.120 "dhchap_digests": [ 00:08:18.120 "sha256", 00:08:18.120 "sha384", 00:08:18.120 "sha512" 00:08:18.120 ], 00:08:18.120 "dhchap_dhgroups": [ 00:08:18.120 "null", 00:08:18.120 "ffdhe2048", 00:08:18.120 "ffdhe3072", 00:08:18.120 "ffdhe4096", 00:08:18.120 "ffdhe6144", 00:08:18.120 "ffdhe8192" 00:08:18.120 ] 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "bdev_nvme_set_hotplug", 00:08:18.120 "params": { 00:08:18.120 "period_us": 100000, 00:08:18.120 "enable": false 00:08:18.120 } 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "method": "bdev_wait_for_examine" 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "scsi", 00:08:18.120 "config": null 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "scheduler", 00:08:18.120 "config": [ 00:08:18.120 { 00:08:18.120 "method": "framework_set_scheduler", 00:08:18.120 "params": { 00:08:18.120 "name": "static" 00:08:18.120 } 00:08:18.120 } 00:08:18.120 ] 00:08:18.120 }, 00:08:18.120 { 00:08:18.120 "subsystem": "vhost_scsi", 00:08:18.120 "config": [] 00:08:18.120 }, 00:08:18.120 { 00:08:18.121 "subsystem": "vhost_blk", 00:08:18.121 "config": [] 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "subsystem": "ublk", 00:08:18.121 "config": [] 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "subsystem": "nbd", 00:08:18.121 "config": [] 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "subsystem": "nvmf", 00:08:18.121 "config": [ 00:08:18.121 { 00:08:18.121 "method": "nvmf_set_config", 00:08:18.121 "params": { 00:08:18.121 "discovery_filter": "match_any", 00:08:18.121 "admin_cmd_passthru": { 00:08:18.121 "identify_ctrlr": false 00:08:18.121 }, 00:08:18.121 "dhchap_digests": [ 00:08:18.121 "sha256", 00:08:18.121 "sha384", 00:08:18.121 "sha512" 00:08:18.121 ], 00:08:18.121 "dhchap_dhgroups": [ 00:08:18.121 "null", 00:08:18.121 "ffdhe2048", 00:08:18.121 "ffdhe3072", 00:08:18.121 "ffdhe4096", 00:08:18.121 "ffdhe6144", 00:08:18.121 "ffdhe8192" 00:08:18.121 ] 00:08:18.121 } 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "method": "nvmf_set_max_subsystems", 00:08:18.121 "params": { 00:08:18.121 "max_subsystems": 1024 00:08:18.121 } 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "method": "nvmf_set_crdt", 00:08:18.121 "params": { 00:08:18.121 "crdt1": 0, 00:08:18.121 "crdt2": 0, 00:08:18.121 "crdt3": 0 00:08:18.121 } 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "method": "nvmf_create_transport", 00:08:18.121 "params": { 00:08:18.121 "trtype": "TCP", 00:08:18.121 "max_queue_depth": 128, 00:08:18.121 "max_io_qpairs_per_ctrlr": 127, 00:08:18.121 "in_capsule_data_size": 4096, 00:08:18.121 "max_io_size": 131072, 00:08:18.121 "io_unit_size": 131072, 00:08:18.121 "max_aq_depth": 128, 00:08:18.121 "num_shared_buffers": 511, 00:08:18.121 "buf_cache_size": 4294967295, 00:08:18.121 "dif_insert_or_strip": false, 00:08:18.121 "zcopy": false, 00:08:18.121 "c2h_success": true, 00:08:18.121 "sock_priority": 0, 00:08:18.121 "abort_timeout_sec": 1, 00:08:18.121 "ack_timeout": 0, 00:08:18.121 "data_wr_pool_size": 0 00:08:18.121 } 00:08:18.121 } 00:08:18.121 ] 00:08:18.121 }, 00:08:18.121 { 00:08:18.121 "subsystem": "iscsi", 00:08:18.121 "config": [ 00:08:18.121 { 00:08:18.121 "method": "iscsi_set_options", 00:08:18.121 "params": { 00:08:18.121 "node_base": "iqn.2016-06.io.spdk", 00:08:18.121 "max_sessions": 128, 00:08:18.121 "max_connections_per_session": 2, 00:08:18.121 "max_queue_depth": 64, 00:08:18.121 
"default_time2wait": 2, 00:08:18.121 "default_time2retain": 20, 00:08:18.121 "first_burst_length": 8192, 00:08:18.121 "immediate_data": true, 00:08:18.121 "allow_duplicated_isid": false, 00:08:18.121 "error_recovery_level": 0, 00:08:18.121 "nop_timeout": 60, 00:08:18.121 "nop_in_interval": 30, 00:08:18.121 "disable_chap": false, 00:08:18.121 "require_chap": false, 00:08:18.121 "mutual_chap": false, 00:08:18.121 "chap_group": 0, 00:08:18.121 "max_large_datain_per_connection": 64, 00:08:18.121 "max_r2t_per_connection": 4, 00:08:18.121 "pdu_pool_size": 36864, 00:08:18.121 "immediate_data_pool_size": 16384, 00:08:18.121 "data_out_pool_size": 2048 00:08:18.121 } 00:08:18.121 } 00:08:18.121 ] 00:08:18.121 } 00:08:18.121 ] 00:08:18.121 } 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58106 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58106 ']' 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58106 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58106 00:08:18.121 killing process with pid 58106 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58106' 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58106 00:08:18.121 10:13:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58106 00:08:20.706 10:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58162 00:08:20.706 10:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:20.706 10:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58162 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58162 ']' 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58162 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58162 00:08:25.987 killing process with pid 58162 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58162' 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58162 00:08:25.987 10:13:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58162 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:27.893 00:08:27.893 real 0m11.414s 00:08:27.893 user 0m10.864s 00:08:27.893 sys 0m0.889s 00:08:27.893 ************************************ 00:08:27.893 END TEST skip_rpc_with_json 00:08:27.893 ************************************ 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:27.893 10:13:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:27.893 10:13:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.893 10:13:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.893 10:13:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.893 ************************************ 00:08:27.893 START TEST skip_rpc_with_delay 00:08:27.893 ************************************ 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.893 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.894 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:27.894 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:27.894 10:13:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:28.152 [2024-11-25 10:13:35.097968] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
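That app.c error is the whole point of skip_rpc_with_delay: the harness's NOT wrapper runs spdk_tgt with the mutually exclusive --no-rpc-server and --wait-for-rpc flags and passes only if the binary refuses to start, which is exactly what the last line shows before the END TEST banner that follows. A hedged sketch of that expect-failure idiom, with the helper name and $SPDK_BIN path being illustrative rather than the harness's actual NOT implementation:

  # Sketch of the expected-failure pattern used by skip_rpc_with_delay above.
  # Assumes $SPDK_BIN points at a built SPDK tree; expect_failure is a made-up helper.
  expect_failure() {
      if "$@"; then
          echo "expected failure, but command succeeded: $*" >&2
          return 1
      fi
      return 0   # non-zero exit from the command is the passing outcome
  }

  expect_failure "$SPDK_BIN/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc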
00:08:28.152 ************************************ 00:08:28.152 END TEST skip_rpc_with_delay 00:08:28.152 ************************************ 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:28.152 00:08:28.152 real 0m0.181s 00:08:28.152 user 0m0.076s 00:08:28.152 sys 0m0.103s 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.152 10:13:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:28.152 10:13:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:28.152 10:13:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:28.152 10:13:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:28.152 10:13:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.152 10:13:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.152 10:13:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.152 ************************************ 00:08:28.152 START TEST exit_on_failed_rpc_init 00:08:28.152 ************************************ 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58290 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58290 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58290 ']' 00:08:28.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.152 10:13:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:28.411 [2024-11-25 10:13:35.345663] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:08:28.412 [2024-11-25 10:13:35.346364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:08:28.412 [2024-11-25 10:13:35.514585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.671 [2024-11-25 10:13:35.625147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:29.609 10:13:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:29.609 [2024-11-25 10:13:36.590894] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:29.609 [2024-11-25 10:13:36.591028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:08:29.874 [2024-11-25 10:13:36.769178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.874 [2024-11-25 10:13:36.880442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.874 [2024-11-25 10:13:36.880719] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:29.874 [2024-11-25 10:13:36.880745] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:29.874 [2024-11-25 10:13:36.880765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58290 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58290 ']' 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58290 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58290 00:08:30.133 killing process with pid 58290 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.133 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.134 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58290' 00:08:30.134 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58290 00:08:30.134 10:13:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58290 00:08:32.684 00:08:32.684 real 0m4.357s 00:08:32.684 user 0m4.672s 00:08:32.684 sys 0m0.580s 00:08:32.684 10:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.684 ************************************ 00:08:32.684 END TEST exit_on_failed_rpc_init 00:08:32.684 ************************************ 00:08:32.684 10:13:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 10:13:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:32.684 00:08:32.684 real 0m24.090s 00:08:32.684 user 0m22.842s 00:08:32.684 sys 0m2.403s 00:08:32.684 10:13:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.684 10:13:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 ************************************ 00:08:32.684 END TEST skip_rpc 00:08:32.684 ************************************ 00:08:32.684 10:13:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:32.684 10:13:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.684 10:13:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.684 10:13:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.684 
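exit_on_failed_rpc_init, which finishes just above, checks the failure path visible in the rpc.c errors: a first spdk_tgt claims /var/tmp/spdk.sock, a second instance pointed at the same default socket must fail rpc_listen and exit non-zero, and the first target must still shut down cleanly afterwards. A condensed sketch of that scenario, assuming spdk_tgt is on PATH and with a plain sleep standing in for the harness's waitforlisten readiness polling:

  # Sketch of the exit_on_failed_rpc_init scenario from the trace above.
  # Assumes spdk_tgt is on PATH; 'sleep 3' is a crude stand-in for waitforlisten.
  spdk_tgt -m 0x1 &             # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 3                       # wait for the RPC socket to come up
  if spdk_tgt -m 0x2; then      # second instance must fail: socket already in use
      echo "second target unexpectedly started" >&2
      kill "$first"; exit 1
  fi
  kill "$first"                 # first target should still be killable cleanly
  wait "$first" || true

The -m 0x1 / -m 0x2 core masks match the trace, which is why the two reactors report cores 0 and 1; the second instance's "spdk_app_stop'd on non-zero" is the expected exit, not a test failure.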
************************************ 00:08:32.684 START TEST rpc_client 00:08:32.684 ************************************ 00:08:32.684 10:13:39 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:32.941 * Looking for test storage... 00:08:32.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.941 10:13:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.941 --rc genhtml_branch_coverage=1 00:08:32.941 --rc genhtml_function_coverage=1 00:08:32.941 --rc genhtml_legend=1 00:08:32.941 --rc geninfo_all_blocks=1 00:08:32.941 --rc geninfo_unexecuted_blocks=1 00:08:32.941 00:08:32.941 ' 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.941 --rc genhtml_branch_coverage=1 00:08:32.941 --rc genhtml_function_coverage=1 00:08:32.941 --rc genhtml_legend=1 00:08:32.941 --rc geninfo_all_blocks=1 00:08:32.941 --rc geninfo_unexecuted_blocks=1 00:08:32.941 00:08:32.941 ' 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.941 --rc genhtml_branch_coverage=1 00:08:32.941 --rc genhtml_function_coverage=1 00:08:32.941 --rc genhtml_legend=1 00:08:32.941 --rc geninfo_all_blocks=1 00:08:32.941 --rc geninfo_unexecuted_blocks=1 00:08:32.941 00:08:32.941 ' 00:08:32.941 10:13:39 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.941 --rc genhtml_branch_coverage=1 00:08:32.941 --rc genhtml_function_coverage=1 00:08:32.941 --rc genhtml_legend=1 00:08:32.941 --rc geninfo_all_blocks=1 00:08:32.941 --rc geninfo_unexecuted_blocks=1 00:08:32.941 00:08:32.941 ' 00:08:32.941 10:13:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:32.941 OK 00:08:32.941 10:13:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:32.941 00:08:32.941 real 0m0.313s 00:08:32.941 user 0m0.175s 00:08:32.941 sys 0m0.150s 00:08:32.941 10:13:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.941 10:13:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:32.941 ************************************ 00:08:32.941 END TEST rpc_client 00:08:32.941 ************************************ 00:08:33.201 10:13:40 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:33.201 10:13:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.201 10:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.201 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:33.201 ************************************ 00:08:33.201 START TEST json_config 00:08:33.201 ************************************ 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.201 10:13:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.201 10:13:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.201 10:13:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.201 10:13:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.201 10:13:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.201 10:13:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:33.201 10:13:40 json_config -- scripts/common.sh@345 -- # : 1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.201 10:13:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.201 10:13:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@353 -- # local d=1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.201 10:13:40 json_config -- scripts/common.sh@355 -- # echo 1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.201 10:13:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@353 -- # local d=2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.201 10:13:40 json_config -- scripts/common.sh@355 -- # echo 2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.201 10:13:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.201 10:13:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.201 10:13:40 json_config -- scripts/common.sh@368 -- # return 0 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.201 --rc genhtml_branch_coverage=1 00:08:33.201 --rc genhtml_function_coverage=1 00:08:33.201 --rc genhtml_legend=1 00:08:33.201 --rc geninfo_all_blocks=1 00:08:33.201 --rc geninfo_unexecuted_blocks=1 00:08:33.201 00:08:33.201 ' 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.201 --rc genhtml_branch_coverage=1 00:08:33.201 --rc genhtml_function_coverage=1 00:08:33.201 --rc genhtml_legend=1 00:08:33.201 --rc geninfo_all_blocks=1 00:08:33.201 --rc geninfo_unexecuted_blocks=1 00:08:33.201 00:08:33.201 ' 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.201 --rc genhtml_branch_coverage=1 00:08:33.201 --rc genhtml_function_coverage=1 00:08:33.201 --rc genhtml_legend=1 00:08:33.201 --rc geninfo_all_blocks=1 00:08:33.201 --rc geninfo_unexecuted_blocks=1 00:08:33.201 00:08:33.201 ' 00:08:33.201 10:13:40 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.201 --rc genhtml_branch_coverage=1 00:08:33.201 --rc genhtml_function_coverage=1 00:08:33.201 --rc genhtml_legend=1 00:08:33.201 --rc geninfo_all_blocks=1 00:08:33.201 --rc geninfo_unexecuted_blocks=1 00:08:33.201 00:08:33.201 ' 00:08:33.201 10:13:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.201 10:13:40 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f61114a-0326-46e8-aeb1-5f899d706120 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2f61114a-0326-46e8-aeb1-5f899d706120 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.201 10:13:40 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.201 10:13:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.462 10:13:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.462 10:13:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.462 10:13:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.462 10:13:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.462 10:13:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.462 10:13:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.462 10:13:40 json_config -- paths/export.sh@5 -- # export PATH 00:08:33.462 10:13:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@51 -- # : 0 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.462 WARNING: No tests are enabled so not 
running JSON configuration tests 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.462 10:13:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:33.462 10:13:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:33.462 ************************************ 00:08:33.462 END TEST json_config 00:08:33.462 ************************************ 00:08:33.462 00:08:33.462 real 0m0.225s 00:08:33.462 user 0m0.131s 00:08:33.462 sys 0m0.091s 00:08:33.462 10:13:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.462 10:13:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:33.462 10:13:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.462 10:13:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.462 10:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.462 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:33.462 ************************************ 00:08:33.462 START TEST json_config_extra_key 00:08:33.462 ************************************ 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.462 10:13:40 json_config_extra_key 
-- scripts/common.sh@337 -- # read -ra ver2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.462 10:13:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.462 --rc genhtml_branch_coverage=1 00:08:33.462 --rc genhtml_function_coverage=1 00:08:33.462 --rc genhtml_legend=1 00:08:33.462 --rc geninfo_all_blocks=1 00:08:33.462 --rc geninfo_unexecuted_blocks=1 00:08:33.462 00:08:33.462 ' 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.462 --rc genhtml_branch_coverage=1 00:08:33.462 --rc genhtml_function_coverage=1 00:08:33.462 --rc genhtml_legend=1 00:08:33.462 --rc geninfo_all_blocks=1 00:08:33.462 --rc geninfo_unexecuted_blocks=1 00:08:33.462 00:08:33.462 ' 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.462 --rc genhtml_branch_coverage=1 00:08:33.462 --rc genhtml_function_coverage=1 00:08:33.462 --rc genhtml_legend=1 00:08:33.462 --rc geninfo_all_blocks=1 00:08:33.462 --rc geninfo_unexecuted_blocks=1 00:08:33.462 00:08:33.462 ' 00:08:33.462 10:13:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.462 --rc genhtml_branch_coverage=1 00:08:33.462 --rc genhtml_function_coverage=1 00:08:33.462 --rc 
genhtml_legend=1 00:08:33.462 --rc geninfo_all_blocks=1 00:08:33.462 --rc geninfo_unexecuted_blocks=1 00:08:33.462 00:08:33.462 ' 00:08:33.462 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2f61114a-0326-46e8-aeb1-5f899d706120 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2f61114a-0326-46e8-aeb1-5f899d706120 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.463 10:13:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.463 10:13:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.463 10:13:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.463 10:13:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.463 10:13:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.463 10:13:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.463 10:13:40 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.463 10:13:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:33.463 10:13:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.463 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.463 10:13:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:33.723 INFO: launching applications... 
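The "[: : integer expression expected" complaint from nvmf/common.sh line 33 (seen here and earlier in the json_config run) comes from the traced test '[' '' -eq 1 ']': the variable behind that comparison expands to an empty string, and the [ builtin refuses to treat an empty operand as an integer. The run survives because the failing [ merely returns a non-zero status. A minimal reproduction and a defensive rewrite, using a hypothetical variable name since the real one is not visible in the trace:

    # Reproduces the failure mode from nvmf/common.sh line 33.
    unset SOME_FLAG                 # hypothetical name, not the real variable
    [ "$SOME_FLAG" -eq 1 ]          # stderr: "[: : integer expression expected"

    # Defensive variants: default an empty/unset value to 0 before the
    # numeric comparison, so the test is false instead of an error.
    [ "${SOME_FLAG:-0}" -eq 1 ]
    (( ${SOME_FLAG:-0} == 1 ))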
00:08:33.723 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58518 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:33.723 Waiting for target to run... 00:08:33.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:33.723 10:13:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58518 /var/tmp/spdk_tgt.sock 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58518 ']' 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.723 10:13:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:33.723 [2024-11-25 10:13:40.701532] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:33.723 [2024-11-25 10:13:40.701669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58518 ] 00:08:34.291 [2024-11-25 10:13:41.097689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.291 [2024-11-25 10:13:41.204844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.861 00:08:34.861 INFO: shutting down applications... 00:08:34.861 10:13:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.861 10:13:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:34.861 10:13:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
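What json_config_test_start_app is doing above: launch spdk_tgt with an RPC socket (-r /var/tmp/spdk_tgt.sock) plus the extra-key JSON config, record its pid (58518 in this run), and poll until the application answers on that socket before the test proceeds. A minimal sketch of the polling idiom, assuming SPDK's scripts/rpc.py is on PATH; this is not the actual waitforlisten() from autotest_common.sh:

    # Poll a freshly started target until its RPC socket answers, with the
    # same bounded-retry shape as the traced max_retries=100 wait.
    wait_for_rpc() {
        local pid=$1 sock=$2 max_retries=${3:-100} i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited early
            # spdk_get_version is a cheap, always-registered RPC, so any
            # reply means the socket is up and serving.
            rpc.py -s "$sock" spdk_get_version &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # retry budget exhausted
    }
    # usage: wait_for_rpc "$app_pid" /var/tmp/spdk_tgt.sock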
00:08:34.861 10:13:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58518 ]] 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58518 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:34.861 10:13:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:35.430 10:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:35.430 10:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:35.430 10:13:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:35.430 10:13:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:35.999 10:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:35.999 10:13:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:35.999 10:13:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:35.999 10:13:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:36.567 10:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:36.567 10:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.567 10:13:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:36.567 10:13:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:36.826 10:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:36.826 10:13:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:36.826 10:13:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:36.826 10:13:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.395 10:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.395 10:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.395 10:13:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:37.395 10:13:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58518 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:37.979 SPDK target shutdown done 00:08:37.979 Success 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:37.979 10:13:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:37.979 10:13:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:37.979 00:08:37.979 real 0m4.564s 00:08:37.979 user 0m3.977s 00:08:37.979 sys 0m0.619s 00:08:37.979 
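The shutdown sequence traced above (json_config/common.sh) is worth naming: send SIGINT to the target, then re-probe it with kill -0 every half second, giving up after 30 probes. Each "sleep 0.5" record in the log is one iteration of that loop, and "SPDK target shutdown done" is printed once kill -0 finally fails. A sketch of the same idiom, not a verbatim copy of json_config_test_shutdown_app:

    # SIGINT, then poll for exit: up to 30 probes x 0.5 s = ~15 s budget.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "pid $pid still alive after SIGINT" >&2
        return 1
    }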
10:13:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.979 10:13:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:37.979 ************************************ 00:08:37.979 END TEST json_config_extra_key 00:08:37.979 ************************************ 00:08:37.979 10:13:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:37.979 10:13:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.979 10:13:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.979 10:13:45 -- common/autotest_common.sh@10 -- # set +x 00:08:37.979 ************************************ 00:08:37.979 START TEST alias_rpc 00:08:37.979 ************************************ 00:08:37.979 10:13:45 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:38.239 * Looking for test storage... 00:08:38.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.239 10:13:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.239 --rc genhtml_branch_coverage=1 00:08:38.239 --rc genhtml_function_coverage=1 00:08:38.239 --rc genhtml_legend=1 00:08:38.239 --rc geninfo_all_blocks=1 00:08:38.239 --rc geninfo_unexecuted_blocks=1 00:08:38.239 00:08:38.239 ' 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.239 --rc genhtml_branch_coverage=1 00:08:38.239 --rc genhtml_function_coverage=1 00:08:38.239 --rc genhtml_legend=1 00:08:38.239 --rc geninfo_all_blocks=1 00:08:38.239 --rc geninfo_unexecuted_blocks=1 00:08:38.239 00:08:38.239 ' 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.239 --rc genhtml_branch_coverage=1 00:08:38.239 --rc genhtml_function_coverage=1 00:08:38.239 --rc genhtml_legend=1 00:08:38.239 --rc geninfo_all_blocks=1 00:08:38.239 --rc geninfo_unexecuted_blocks=1 00:08:38.239 00:08:38.239 ' 00:08:38.239 10:13:45 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.239 --rc genhtml_branch_coverage=1 00:08:38.240 --rc genhtml_function_coverage=1 00:08:38.240 --rc genhtml_legend=1 00:08:38.240 --rc geninfo_all_blocks=1 00:08:38.240 --rc geninfo_unexecuted_blocks=1 00:08:38.240 00:08:38.240 ' 00:08:38.240 10:13:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:38.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
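The block of scripts/common.sh records above (repeated at the start of every test in this log) is the lcov version gate: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' into arrays and compares them component by component as integers; once the gate resolves, the script exports the LCOV_OPTS/LCOV coverage flags seen in the trace. A compact restatement of that comparison, simplified in that non-numeric components are treated as 0 rather than run through the traced decimal helper:

    # version_lt A B: succeed (return 0) when version A sorts before B.
    version_lt() {
        local -a v1 v2
        local i len a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < len; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}     # missing components count as 0
            [[ $a =~ ^[0-9]+$ ]] || a=0     # simplification of decimal()
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }
    # version_lt 1.15 2 succeeds, matching the traced `lt 1.15 2` result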
00:08:38.240 10:13:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58632 00:08:38.240 10:13:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:38.240 10:13:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58632 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58632 ']' 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.240 10:13:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.499 [2024-11-25 10:13:45.360997] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:38.499 [2024-11-25 10:13:45.361365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ] 00:08:38.499 [2024-11-25 10:13:45.543868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.758 [2024-11-25 10:13:45.668213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.697 10:13:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.697 10:13:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:39.697 10:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:39.956 10:13:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58632 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58632 ']' 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58632 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58632 00:08:39.956 killing process with pid 58632 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58632' 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 58632 00:08:39.956 10:13:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 58632 00:08:42.492 ************************************ 00:08:42.492 END TEST alias_rpc 00:08:42.492 ************************************ 00:08:42.492 00:08:42.492 real 0m4.236s 00:08:42.492 user 0m4.241s 00:08:42.492 sys 0m0.606s 00:08:42.492 10:13:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.492 10:13:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.492 10:13:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:42.492 10:13:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.492 10:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.492 10:13:49 -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:08:42.492 10:13:49 -- common/autotest_common.sh@10 -- # set +x 00:08:42.492 ************************************ 00:08:42.492 START TEST spdkcli_tcp 00:08:42.492 ************************************ 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:42.492 * Looking for test storage... 00:08:42.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.492 10:13:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.492 --rc genhtml_branch_coverage=1 00:08:42.492 --rc genhtml_function_coverage=1 00:08:42.492 --rc genhtml_legend=1 00:08:42.492 --rc geninfo_all_blocks=1 00:08:42.492 --rc geninfo_unexecuted_blocks=1 00:08:42.492 00:08:42.492 ' 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.492 --rc genhtml_branch_coverage=1 00:08:42.492 --rc genhtml_function_coverage=1 00:08:42.492 --rc genhtml_legend=1 00:08:42.492 --rc geninfo_all_blocks=1 00:08:42.492 --rc geninfo_unexecuted_blocks=1 00:08:42.492 00:08:42.492 ' 00:08:42.492 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.493 --rc genhtml_branch_coverage=1 00:08:42.493 --rc genhtml_function_coverage=1 00:08:42.493 --rc genhtml_legend=1 00:08:42.493 --rc geninfo_all_blocks=1 00:08:42.493 --rc geninfo_unexecuted_blocks=1 00:08:42.493 00:08:42.493 ' 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.493 --rc genhtml_branch_coverage=1 00:08:42.493 --rc genhtml_function_coverage=1 00:08:42.493 --rc genhtml_legend=1 00:08:42.493 --rc geninfo_all_blocks=1 00:08:42.493 --rc geninfo_unexecuted_blocks=1 00:08:42.493 00:08:42.493 ' 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:42.493 10:13:49 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58742 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:42.493 10:13:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58742 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58742 ']' 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.493 10:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.751 [2024-11-25 10:13:49.680003] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:42.751 [2024-11-25 10:13:49.680132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58742 ] 00:08:43.010 [2024-11-25 10:13:49.863443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.010 [2024-11-25 10:13:49.983321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.010 [2024-11-25 10:13:49.983353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.948 10:13:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.948 10:13:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:43.948 10:13:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:43.948 10:13:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58759 00:08:43.948 10:13:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:43.948 [ 00:08:43.948 "bdev_malloc_delete", 00:08:43.948 "bdev_malloc_create", 00:08:43.948 "bdev_null_resize", 00:08:43.948 "bdev_null_delete", 00:08:43.948 "bdev_null_create", 00:08:43.948 "bdev_nvme_cuse_unregister", 00:08:43.948 "bdev_nvme_cuse_register", 00:08:43.948 "bdev_opal_new_user", 00:08:43.948 "bdev_opal_set_lock_state", 00:08:43.948 "bdev_opal_delete", 00:08:43.948 "bdev_opal_get_info", 00:08:43.948 "bdev_opal_create", 00:08:43.948 "bdev_nvme_opal_revert", 00:08:43.948 "bdev_nvme_opal_init", 00:08:43.948 "bdev_nvme_send_cmd", 00:08:43.948 "bdev_nvme_set_keys", 00:08:43.948 "bdev_nvme_get_path_iostat", 00:08:43.948 "bdev_nvme_get_mdns_discovery_info", 00:08:43.948 "bdev_nvme_stop_mdns_discovery", 00:08:43.948 "bdev_nvme_start_mdns_discovery", 00:08:43.948 "bdev_nvme_set_multipath_policy", 00:08:43.948 "bdev_nvme_set_preferred_path", 00:08:43.948 "bdev_nvme_get_io_paths", 00:08:43.948 "bdev_nvme_remove_error_injection", 00:08:43.948 "bdev_nvme_add_error_injection", 00:08:43.948 "bdev_nvme_get_discovery_info", 00:08:43.948 "bdev_nvme_stop_discovery", 00:08:43.948 "bdev_nvme_start_discovery", 00:08:43.948 
"bdev_nvme_get_controller_health_info", 00:08:43.948 "bdev_nvme_disable_controller", 00:08:43.948 "bdev_nvme_enable_controller", 00:08:43.948 "bdev_nvme_reset_controller", 00:08:43.948 "bdev_nvme_get_transport_statistics", 00:08:43.948 "bdev_nvme_apply_firmware", 00:08:43.948 "bdev_nvme_detach_controller", 00:08:43.948 "bdev_nvme_get_controllers", 00:08:43.948 "bdev_nvme_attach_controller", 00:08:43.948 "bdev_nvme_set_hotplug", 00:08:43.948 "bdev_nvme_set_options", 00:08:43.948 "bdev_passthru_delete", 00:08:43.948 "bdev_passthru_create", 00:08:43.948 "bdev_lvol_set_parent_bdev", 00:08:43.948 "bdev_lvol_set_parent", 00:08:43.948 "bdev_lvol_check_shallow_copy", 00:08:43.948 "bdev_lvol_start_shallow_copy", 00:08:43.948 "bdev_lvol_grow_lvstore", 00:08:43.948 "bdev_lvol_get_lvols", 00:08:43.948 "bdev_lvol_get_lvstores", 00:08:43.948 "bdev_lvol_delete", 00:08:43.948 "bdev_lvol_set_read_only", 00:08:43.948 "bdev_lvol_resize", 00:08:43.948 "bdev_lvol_decouple_parent", 00:08:43.948 "bdev_lvol_inflate", 00:08:43.948 "bdev_lvol_rename", 00:08:43.948 "bdev_lvol_clone_bdev", 00:08:43.948 "bdev_lvol_clone", 00:08:43.948 "bdev_lvol_snapshot", 00:08:43.948 "bdev_lvol_create", 00:08:43.948 "bdev_lvol_delete_lvstore", 00:08:43.948 "bdev_lvol_rename_lvstore", 00:08:43.948 "bdev_lvol_create_lvstore", 00:08:43.948 "bdev_raid_set_options", 00:08:43.948 "bdev_raid_remove_base_bdev", 00:08:43.948 "bdev_raid_add_base_bdev", 00:08:43.948 "bdev_raid_delete", 00:08:43.948 "bdev_raid_create", 00:08:43.948 "bdev_raid_get_bdevs", 00:08:43.948 "bdev_error_inject_error", 00:08:43.948 "bdev_error_delete", 00:08:43.948 "bdev_error_create", 00:08:43.948 "bdev_split_delete", 00:08:43.948 "bdev_split_create", 00:08:43.948 "bdev_delay_delete", 00:08:43.948 "bdev_delay_create", 00:08:43.948 "bdev_delay_update_latency", 00:08:43.948 "bdev_zone_block_delete", 00:08:43.948 "bdev_zone_block_create", 00:08:43.948 "blobfs_create", 00:08:43.948 "blobfs_detect", 00:08:43.948 "blobfs_set_cache_size", 00:08:43.948 "bdev_xnvme_delete", 00:08:43.948 "bdev_xnvme_create", 00:08:43.948 "bdev_aio_delete", 00:08:43.948 "bdev_aio_rescan", 00:08:43.948 "bdev_aio_create", 00:08:43.948 "bdev_ftl_set_property", 00:08:43.948 "bdev_ftl_get_properties", 00:08:43.948 "bdev_ftl_get_stats", 00:08:43.948 "bdev_ftl_unmap", 00:08:43.948 "bdev_ftl_unload", 00:08:43.948 "bdev_ftl_delete", 00:08:43.948 "bdev_ftl_load", 00:08:43.948 "bdev_ftl_create", 00:08:43.948 "bdev_virtio_attach_controller", 00:08:43.948 "bdev_virtio_scsi_get_devices", 00:08:43.948 "bdev_virtio_detach_controller", 00:08:43.948 "bdev_virtio_blk_set_hotplug", 00:08:43.948 "bdev_iscsi_delete", 00:08:43.948 "bdev_iscsi_create", 00:08:43.948 "bdev_iscsi_set_options", 00:08:43.948 "accel_error_inject_error", 00:08:43.948 "ioat_scan_accel_module", 00:08:43.948 "dsa_scan_accel_module", 00:08:43.948 "iaa_scan_accel_module", 00:08:43.948 "keyring_file_remove_key", 00:08:43.948 "keyring_file_add_key", 00:08:43.948 "keyring_linux_set_options", 00:08:43.948 "fsdev_aio_delete", 00:08:43.948 "fsdev_aio_create", 00:08:43.948 "iscsi_get_histogram", 00:08:43.948 "iscsi_enable_histogram", 00:08:43.948 "iscsi_set_options", 00:08:43.948 "iscsi_get_auth_groups", 00:08:43.948 "iscsi_auth_group_remove_secret", 00:08:43.948 "iscsi_auth_group_add_secret", 00:08:43.948 "iscsi_delete_auth_group", 00:08:43.948 "iscsi_create_auth_group", 00:08:43.948 "iscsi_set_discovery_auth", 00:08:43.948 "iscsi_get_options", 00:08:43.948 "iscsi_target_node_request_logout", 00:08:43.948 "iscsi_target_node_set_redirect", 00:08:43.948 
"iscsi_target_node_set_auth", 00:08:43.948 "iscsi_target_node_add_lun", 00:08:43.948 "iscsi_get_stats", 00:08:43.948 "iscsi_get_connections", 00:08:43.948 "iscsi_portal_group_set_auth", 00:08:43.948 "iscsi_start_portal_group", 00:08:43.948 "iscsi_delete_portal_group", 00:08:43.948 "iscsi_create_portal_group", 00:08:43.948 "iscsi_get_portal_groups", 00:08:43.948 "iscsi_delete_target_node", 00:08:43.948 "iscsi_target_node_remove_pg_ig_maps", 00:08:43.948 "iscsi_target_node_add_pg_ig_maps", 00:08:43.948 "iscsi_create_target_node", 00:08:43.948 "iscsi_get_target_nodes", 00:08:43.948 "iscsi_delete_initiator_group", 00:08:43.948 "iscsi_initiator_group_remove_initiators", 00:08:43.948 "iscsi_initiator_group_add_initiators", 00:08:43.948 "iscsi_create_initiator_group", 00:08:43.948 "iscsi_get_initiator_groups", 00:08:43.948 "nvmf_set_crdt", 00:08:43.948 "nvmf_set_config", 00:08:43.948 "nvmf_set_max_subsystems", 00:08:43.948 "nvmf_stop_mdns_prr", 00:08:43.948 "nvmf_publish_mdns_prr", 00:08:43.948 "nvmf_subsystem_get_listeners", 00:08:43.948 "nvmf_subsystem_get_qpairs", 00:08:43.948 "nvmf_subsystem_get_controllers", 00:08:43.948 "nvmf_get_stats", 00:08:43.948 "nvmf_get_transports", 00:08:43.948 "nvmf_create_transport", 00:08:43.948 "nvmf_get_targets", 00:08:43.948 "nvmf_delete_target", 00:08:43.948 "nvmf_create_target", 00:08:43.948 "nvmf_subsystem_allow_any_host", 00:08:43.948 "nvmf_subsystem_set_keys", 00:08:43.949 "nvmf_subsystem_remove_host", 00:08:43.949 "nvmf_subsystem_add_host", 00:08:43.949 "nvmf_ns_remove_host", 00:08:43.949 "nvmf_ns_add_host", 00:08:43.949 "nvmf_subsystem_remove_ns", 00:08:43.949 "nvmf_subsystem_set_ns_ana_group", 00:08:43.949 "nvmf_subsystem_add_ns", 00:08:43.949 "nvmf_subsystem_listener_set_ana_state", 00:08:43.949 "nvmf_discovery_get_referrals", 00:08:43.949 "nvmf_discovery_remove_referral", 00:08:43.949 "nvmf_discovery_add_referral", 00:08:43.949 "nvmf_subsystem_remove_listener", 00:08:43.949 "nvmf_subsystem_add_listener", 00:08:43.949 "nvmf_delete_subsystem", 00:08:43.949 "nvmf_create_subsystem", 00:08:43.949 "nvmf_get_subsystems", 00:08:43.949 "env_dpdk_get_mem_stats", 00:08:43.949 "nbd_get_disks", 00:08:43.949 "nbd_stop_disk", 00:08:43.949 "nbd_start_disk", 00:08:43.949 "ublk_recover_disk", 00:08:43.949 "ublk_get_disks", 00:08:43.949 "ublk_stop_disk", 00:08:43.949 "ublk_start_disk", 00:08:43.949 "ublk_destroy_target", 00:08:43.949 "ublk_create_target", 00:08:43.949 "virtio_blk_create_transport", 00:08:43.949 "virtio_blk_get_transports", 00:08:43.949 "vhost_controller_set_coalescing", 00:08:43.949 "vhost_get_controllers", 00:08:43.949 "vhost_delete_controller", 00:08:43.949 "vhost_create_blk_controller", 00:08:43.949 "vhost_scsi_controller_remove_target", 00:08:43.949 "vhost_scsi_controller_add_target", 00:08:43.949 "vhost_start_scsi_controller", 00:08:43.949 "vhost_create_scsi_controller", 00:08:43.949 "thread_set_cpumask", 00:08:43.949 "scheduler_set_options", 00:08:43.949 "framework_get_governor", 00:08:43.949 "framework_get_scheduler", 00:08:43.949 "framework_set_scheduler", 00:08:43.949 "framework_get_reactors", 00:08:43.949 "thread_get_io_channels", 00:08:43.949 "thread_get_pollers", 00:08:43.949 "thread_get_stats", 00:08:43.949 "framework_monitor_context_switch", 00:08:43.949 "spdk_kill_instance", 00:08:43.949 "log_enable_timestamps", 00:08:43.949 "log_get_flags", 00:08:43.949 "log_clear_flag", 00:08:43.949 "log_set_flag", 00:08:43.949 "log_get_level", 00:08:43.949 "log_set_level", 00:08:43.949 "log_get_print_level", 00:08:43.949 "log_set_print_level", 
00:08:43.949 "framework_enable_cpumask_locks", 00:08:43.949 "framework_disable_cpumask_locks", 00:08:43.949 "framework_wait_init", 00:08:43.949 "framework_start_init", 00:08:43.949 "scsi_get_devices", 00:08:43.949 "bdev_get_histogram", 00:08:43.949 "bdev_enable_histogram", 00:08:43.949 "bdev_set_qos_limit", 00:08:43.949 "bdev_set_qd_sampling_period", 00:08:43.949 "bdev_get_bdevs", 00:08:43.949 "bdev_reset_iostat", 00:08:43.949 "bdev_get_iostat", 00:08:43.949 "bdev_examine", 00:08:43.949 "bdev_wait_for_examine", 00:08:43.949 "bdev_set_options", 00:08:43.949 "accel_get_stats", 00:08:43.949 "accel_set_options", 00:08:43.949 "accel_set_driver", 00:08:43.949 "accel_crypto_key_destroy", 00:08:43.949 "accel_crypto_keys_get", 00:08:43.949 "accel_crypto_key_create", 00:08:43.949 "accel_assign_opc", 00:08:43.949 "accel_get_module_info", 00:08:43.949 "accel_get_opc_assignments", 00:08:43.949 "vmd_rescan", 00:08:43.949 "vmd_remove_device", 00:08:43.949 "vmd_enable", 00:08:43.949 "sock_get_default_impl", 00:08:43.949 "sock_set_default_impl", 00:08:43.949 "sock_impl_set_options", 00:08:43.949 "sock_impl_get_options", 00:08:43.949 "iobuf_get_stats", 00:08:43.949 "iobuf_set_options", 00:08:43.949 "keyring_get_keys", 00:08:43.949 "framework_get_pci_devices", 00:08:43.949 "framework_get_config", 00:08:43.949 "framework_get_subsystems", 00:08:43.949 "fsdev_set_opts", 00:08:43.949 "fsdev_get_opts", 00:08:43.949 "trace_get_info", 00:08:43.949 "trace_get_tpoint_group_mask", 00:08:43.949 "trace_disable_tpoint_group", 00:08:43.949 "trace_enable_tpoint_group", 00:08:43.949 "trace_clear_tpoint_mask", 00:08:43.949 "trace_set_tpoint_mask", 00:08:43.949 "notify_get_notifications", 00:08:43.949 "notify_get_types", 00:08:43.949 "spdk_get_version", 00:08:43.949 "rpc_get_methods" 00:08:43.949 ] 00:08:44.209 10:13:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.209 10:13:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:44.209 10:13:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58742 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58742 ']' 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58742 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58742 00:08:44.209 killing process with pid 58742 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58742' 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58742 00:08:44.209 10:13:51 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58742 00:08:46.747 ************************************ 00:08:46.747 END TEST spdkcli_tcp 00:08:46.747 ************************************ 00:08:46.747 00:08:46.747 real 0m4.255s 00:08:46.747 user 0m7.517s 00:08:46.747 sys 0m0.687s 00:08:46.747 10:13:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.747 10:13:53 spdkcli_tcp -- common/autotest_common.sh@10 
-- # set +x 00:08:46.747 10:13:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:46.747 10:13:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.747 10:13:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.747 10:13:53 -- common/autotest_common.sh@10 -- # set +x 00:08:46.747 ************************************ 00:08:46.747 START TEST dpdk_mem_utility 00:08:46.747 ************************************ 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:46.747 * Looking for test storage... 00:08:46.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:46.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
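One step back to the spdkcli_tcp run that just finished: spdk_tgt only listens on a UNIX domain socket, so the test bridged TCP onto it with socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock (pid 58759 above) and then drove rpc.py over TCP; the large method list earlier is the rpc_get_methods reply fetched through that bridge. The shape of the bridge, restated below; note that plain TCP-LISTEN serves a single connection and then exits, which is all the test needs:

    # Bridge 127.0.0.1:9998 to the target's UNIX RPC socket, then issue an
    # RPC over TCP. -r is the retry count and -t the timeout, as traced.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null   # harmless if socat already exited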
00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.747 10:13:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.747 --rc genhtml_branch_coverage=1 00:08:46.747 --rc genhtml_function_coverage=1 00:08:46.747 --rc genhtml_legend=1 00:08:46.747 --rc geninfo_all_blocks=1 00:08:46.747 --rc geninfo_unexecuted_blocks=1 00:08:46.747 00:08:46.747 ' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.747 --rc genhtml_branch_coverage=1 00:08:46.747 --rc genhtml_function_coverage=1 00:08:46.747 --rc genhtml_legend=1 00:08:46.747 --rc geninfo_all_blocks=1 00:08:46.747 --rc geninfo_unexecuted_blocks=1 00:08:46.747 00:08:46.747 ' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.747 --rc genhtml_branch_coverage=1 00:08:46.747 --rc genhtml_function_coverage=1 00:08:46.747 --rc genhtml_legend=1 00:08:46.747 --rc geninfo_all_blocks=1 00:08:46.747 --rc geninfo_unexecuted_blocks=1 00:08:46.747 00:08:46.747 ' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.747 --rc genhtml_branch_coverage=1 00:08:46.747 --rc genhtml_function_coverage=1 00:08:46.747 --rc genhtml_legend=1 00:08:46.747 --rc geninfo_all_blocks=1 00:08:46.747 --rc geninfo_unexecuted_blocks=1 00:08:46.747 00:08:46.747 ' 00:08:46.747 10:13:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:46.747 10:13:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58864 00:08:46.747 10:13:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58864 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58864 ']' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.747 10:13:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:46.747 10:13:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:47.006 [2024-11-25 10:13:53.952827] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
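The dpdk_mem_utility test that starts here follows a three-step flow, visible in the records below: ask the running target to dump its DPDK memory statistics via the env_dpdk_get_mem_stats RPC (which replies with the dump file, /tmp/spdk_mem_dump.txt), run scripts/dpdk_mem_info.py to summarize heaps, mempools, and memzones, and run it again with -m 0 for the per-element detail of heap 0. Restated as a standalone sketch, assuming a target is already up on the default RPC socket:

    # Dump and inspect DPDK memory for a running spdk_tgt.
    rpc.py env_dpdk_get_mem_stats        # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py             # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0        # detailed view of heap id 0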
00:08:47.006 [2024-11-25 10:13:53.953171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:08:47.265 [2024-11-25 10:13:54.135526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.265 [2024-11-25 10:13:54.251642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.203 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.203 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:48.203 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:48.203 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:48.203 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.203 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:48.203 { 00:08:48.203 "filename": "/tmp/spdk_mem_dump.txt" 00:08:48.203 } 00:08:48.203 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.203 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:48.203 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:48.203 1 heaps totaling size 816.000000 MiB 00:08:48.203 size: 816.000000 MiB heap id: 0 00:08:48.203 end heaps---------- 00:08:48.203 9 mempools totaling size 595.772034 MiB 00:08:48.203 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:48.203 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:48.203 size: 92.545471 MiB name: bdev_io_58864 00:08:48.203 size: 50.003479 MiB name: msgpool_58864 00:08:48.203 size: 36.509338 MiB name: fsdev_io_58864 00:08:48.203 size: 21.763794 MiB name: PDU_Pool 00:08:48.203 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:48.203 size: 4.133484 MiB name: evtpool_58864 00:08:48.203 size: 0.026123 MiB name: Session_Pool 00:08:48.203 end mempools------- 00:08:48.203 6 memzones totaling size 4.142822 MiB 00:08:48.203 size: 1.000366 MiB name: RG_ring_0_58864 00:08:48.203 size: 1.000366 MiB name: RG_ring_1_58864 00:08:48.203 size: 1.000366 MiB name: RG_ring_4_58864 00:08:48.203 size: 1.000366 MiB name: RG_ring_5_58864 00:08:48.203 size: 0.125366 MiB name: RG_ring_2_58864 00:08:48.203 size: 0.015991 MiB name: RG_ring_3_58864 00:08:48.203 end memzones------- 00:08:48.203 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:48.203 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:08:48.203 list of free elements. 
size: 16.792847 MiB 00:08:48.203 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:48.203 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:48.203 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:48.203 element at address: 0x200018d00040 with size: 0.999939 MiB 00:08:48.203 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:48.203 element at address: 0x200019200000 with size: 0.999084 MiB 00:08:48.203 element at address: 0x200031e00000 with size: 0.994324 MiB 00:08:48.203 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:48.203 element at address: 0x200018a00000 with size: 0.959656 MiB 00:08:48.203 element at address: 0x200019500040 with size: 0.936401 MiB 00:08:48.203 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:48.203 element at address: 0x20001ac00000 with size: 0.563171 MiB 00:08:48.203 element at address: 0x200000c00000 with size: 0.490173 MiB 00:08:48.203 element at address: 0x200018e00000 with size: 0.487976 MiB 00:08:48.204 element at address: 0x200019600000 with size: 0.485413 MiB 00:08:48.204 element at address: 0x200012c00000 with size: 0.443481 MiB 00:08:48.204 element at address: 0x200028000000 with size: 0.390442 MiB 00:08:48.204 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:48.204 list of standard malloc elements. size: 199.286255 MiB 00:08:48.204 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:48.204 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:48.204 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:08:48.204 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:48.204 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:48.204 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:48.204 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:08:48.204 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:48.204 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:48.204 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:08:48.204 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:48.204 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:48.204 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:48.204 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71880 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71980 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c72080 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012c72180 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:08:48.204 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac923c0 with size: 0.000244 MiB 
00:08:48.205 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:08:48.205 element at address: 0x200028063f40 with size: 0.000244 MiB 00:08:48.205 element at 
address: 0x200028064040 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806af80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b080 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b180 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b280 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b380 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b480 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b580 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b680 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b780 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b880 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806b980 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806be80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c080 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c180 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c280 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c380 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c480 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c580 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c680 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c780 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c880 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806c980 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d080 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d180 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d280 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d380 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d480 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d580 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d680 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d780 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d880 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806d980 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806da80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806db80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806de80 
with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806df80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e080 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e180 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e280 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e380 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e480 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e580 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e680 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e780 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e880 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806e980 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f080 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f180 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f280 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f380 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f480 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f580 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f680 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f780 with size: 0.000244 MiB 00:08:48.205 element at address: 0x20002806f880 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806f980 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:08:48.206 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:08:48.206 list of memzone associated elements. 
size: 599.920898 MiB 00:08:48.206 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:08:48.206 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:48.206 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:08:48.206 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:48.206 element at address: 0x200012df4740 with size: 92.045105 MiB 00:08:48.206 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58864_0 00:08:48.206 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:48.206 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58864_0 00:08:48.206 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:48.206 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58864_0 00:08:48.206 element at address: 0x2000197be900 with size: 20.255615 MiB 00:08:48.206 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:48.206 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:08:48.206 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:48.206 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:48.206 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58864_0 00:08:48.206 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:48.206 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58864 00:08:48.206 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:48.206 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58864 00:08:48.206 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:48.206 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:48.206 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:08:48.206 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:48.206 element at address: 0x200018afde00 with size: 1.008179 MiB 00:08:48.206 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:48.206 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:08:48.206 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:48.206 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:48.206 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58864 00:08:48.206 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:48.206 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58864 00:08:48.206 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:08:48.206 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58864 00:08:48.206 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:08:48.206 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58864 00:08:48.206 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:48.206 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58864 00:08:48.206 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:48.206 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58864 00:08:48.206 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:08:48.206 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:48.206 element at address: 0x200012c72280 with size: 0.500549 MiB 00:08:48.206 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:48.206 element at address: 0x20001967c440 with size: 0.250549 MiB 00:08:48.206 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:48.206 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:48.206 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58864 00:08:48.206 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:48.206 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58864 00:08:48.206 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:08:48.206 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:48.206 element at address: 0x200028064140 with size: 0.023804 MiB 00:08:48.206 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:48.206 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:48.206 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58864 00:08:48.206 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:08:48.206 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:48.206 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:48.206 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58864 00:08:48.206 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:48.206 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58864 00:08:48.206 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:48.206 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58864 00:08:48.206 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:08:48.206 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:48.206 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:48.206 10:13:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58864 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58864 ']' 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58864 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58864 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58864' 00:08:48.206 killing process with pid 58864 00:08:48.206 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58864 00:08:48.466 10:13:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58864 00:08:51.024 00:08:51.024 real 0m4.074s 00:08:51.024 user 0m3.936s 00:08:51.024 sys 0m0.618s 00:08:51.024 ************************************ 00:08:51.024 END TEST dpdk_mem_utility 00:08:51.024 ************************************ 00:08:51.024 10:13:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.024 10:13:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:51.024 10:13:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:51.024 10:13:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.024 10:13:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.024 10:13:57 -- common/autotest_common.sh@10 -- # set +x 
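The dpdk_mem_utility test above drives everything through two entry points; a minimal sketch of reproducing the flow by hand, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock and using the script paths from the trace:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt and returns the filename
    ./scripts/dpdk_mem_info.py                # summarize the dump: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0, as printed above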
00:08:51.024 ************************************ 00:08:51.024 START TEST event 00:08:51.024 ************************************ 00:08:51.024 10:13:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:51.024 * Looking for test storage... 00:08:51.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:51.024 10:13:57 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.024 10:13:57 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.024 10:13:57 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:51.024 10:13:58 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:51.024 10:13:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.024 10:13:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.024 10:13:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.024 10:13:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.024 10:13:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.024 10:13:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.024 10:13:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.024 10:13:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.024 10:13:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.024 10:13:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.024 10:13:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.024 10:13:58 event -- scripts/common.sh@344 -- # case "$op" in 00:08:51.024 10:13:58 event -- scripts/common.sh@345 -- # : 1 00:08:51.024 10:13:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.024 10:13:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.024 10:13:58 event -- scripts/common.sh@365 -- # decimal 1 00:08:51.024 10:13:58 event -- scripts/common.sh@353 -- # local d=1 00:08:51.024 10:13:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.024 10:13:58 event -- scripts/common.sh@355 -- # echo 1 00:08:51.024 10:13:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.024 10:13:58 event -- scripts/common.sh@366 -- # decimal 2 00:08:51.024 10:13:58 event -- scripts/common.sh@353 -- # local d=2 00:08:51.024 10:13:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.024 10:13:58 event -- scripts/common.sh@355 -- # echo 2 00:08:51.024 10:13:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.024 10:13:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.024 10:13:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.024 10:13:58 event -- scripts/common.sh@368 -- # return 0 00:08:51.024 10:13:58 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.024 10:13:58 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:51.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.024 --rc genhtml_branch_coverage=1 00:08:51.024 --rc genhtml_function_coverage=1 00:08:51.024 --rc genhtml_legend=1 00:08:51.024 --rc geninfo_all_blocks=1 00:08:51.025 --rc geninfo_unexecuted_blocks=1 00:08:51.025 00:08:51.025 ' 00:08:51.025 10:13:58 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:51.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.025 --rc genhtml_branch_coverage=1 00:08:51.025 --rc genhtml_function_coverage=1 00:08:51.025 --rc genhtml_legend=1 00:08:51.025 --rc 
geninfo_all_blocks=1 00:08:51.025 --rc geninfo_unexecuted_blocks=1 00:08:51.025 00:08:51.025 ' 00:08:51.025 10:13:58 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:51.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.025 --rc genhtml_branch_coverage=1 00:08:51.025 --rc genhtml_function_coverage=1 00:08:51.025 --rc genhtml_legend=1 00:08:51.025 --rc geninfo_all_blocks=1 00:08:51.025 --rc geninfo_unexecuted_blocks=1 00:08:51.025 00:08:51.025 ' 00:08:51.025 10:13:58 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:51.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.025 --rc genhtml_branch_coverage=1 00:08:51.025 --rc genhtml_function_coverage=1 00:08:51.025 --rc genhtml_legend=1 00:08:51.025 --rc geninfo_all_blocks=1 00:08:51.025 --rc geninfo_unexecuted_blocks=1 00:08:51.025 00:08:51.025 ' 00:08:51.025 10:13:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:51.025 10:13:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:51.025 10:13:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.025 10:13:58 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:51.025 10:13:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.025 10:13:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.025 ************************************ 00:08:51.025 START TEST event_perf 00:08:51.025 ************************************ 00:08:51.025 10:13:58 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:51.025 Running I/O for 1 seconds...[2024-11-25 10:13:58.083987] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:51.025 [2024-11-25 10:13:58.084273] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58972 ] 00:08:51.284 [2024-11-25 10:13:58.286160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.543 Running I/O for 1 seconds...[2024-11-25 10:13:58.408393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.543 [2024-11-25 10:13:58.408612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.543 [2024-11-25 10:13:58.408611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.543 [2024-11-25 10:13:58.408619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.922 00:08:52.922 lcore 0: 201478 00:08:52.922 lcore 1: 201477 00:08:52.922 lcore 2: 201476 00:08:52.922 lcore 3: 201477 00:08:52.922 done. 
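The lcore 0-3 counters above are how many events each reactor processed during the one-second run (-t 1) across the 0xF core mask. A hedged sketch of the equivalent standalone invocation, using the binary path from the trace:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # prints one "lcore N: <count>" line per reactor, then "done."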
00:08:52.922 00:08:52.922 real 0m1.619s 00:08:52.922 user 0m4.355s 00:08:52.922 sys 0m0.139s 00:08:52.922 10:13:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.922 10:13:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:52.922 ************************************ 00:08:52.922 END TEST event_perf 00:08:52.922 ************************************ 00:08:52.922 10:13:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.922 10:13:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.922 10:13:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.922 10:13:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:52.922 ************************************ 00:08:52.922 START TEST event_reactor 00:08:52.922 ************************************ 00:08:52.922 10:13:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:52.922 [2024-11-25 10:13:59.780174] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:52.922 [2024-11-25 10:13:59.780469] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:08:52.922 [2024-11-25 10:13:59.963839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.180 [2024-11-25 10:14:00.078938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.555 test_start 00:08:54.555 oneshot 00:08:54.555 tick 100 00:08:54.555 tick 100 00:08:54.555 tick 250 00:08:54.555 tick 100 00:08:54.555 tick 100 00:08:54.555 tick 250 00:08:54.555 tick 100 00:08:54.555 tick 500 00:08:54.555 tick 100 00:08:54.555 tick 100 00:08:54.555 tick 250 00:08:54.555 tick 100 00:08:54.555 tick 100 00:08:54.555 test_end 00:08:54.555 00:08:54.555 real 0m1.580s 00:08:54.555 user 0m1.365s 00:08:54.555 sys 0m0.106s 00:08:54.555 ************************************ 00:08:54.555 END TEST event_reactor 00:08:54.555 ************************************ 00:08:54.555 10:14:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.555 10:14:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:54.555 10:14:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.555 10:14:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:54.555 10:14:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.555 10:14:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:54.555 ************************************ 00:08:54.555 START TEST event_reactor_perf 00:08:54.555 ************************************ 00:08:54.555 10:14:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:54.555 [2024-11-25 10:14:01.435569] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
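Both reactor tests take the same -t runtime flag: the oneshot/tick lines above are the reactor test's timer pollers firing, and reactor_perf (starting below) reports an event round-trip rate. A sketch of running them directly, with the paths run_test used above:

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1            # timer-poller smoke test
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1  # prints "Performance: N events per second"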
00:08:54.555 [2024-11-25 10:14:01.435883] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ] 00:08:54.555 [2024-11-25 10:14:01.620800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.815 [2024-11-25 10:14:01.737543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.851 test_start 00:08:55.851 test_end 00:08:55.851 Performance: 378334 events per second 00:08:56.111 ************************************ 00:08:56.111 END TEST event_reactor_perf 00:08:56.111 ************************************ 00:08:56.111 00:08:56.111 real 0m1.583s 00:08:56.111 user 0m1.362s 00:08:56.111 sys 0m0.112s 00:08:56.111 10:14:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.111 10:14:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:56.111 10:14:03 event -- event/event.sh@49 -- # uname -s 00:08:56.111 10:14:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:56.111 10:14:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:56.111 10:14:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.111 10:14:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.111 10:14:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:56.111 ************************************ 00:08:56.111 START TEST event_scheduler 00:08:56.111 ************************************ 00:08:56.111 10:14:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:56.111 * Looking for test storage... 
00:08:56.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:56.111 10:14:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.111 10:14:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.111 10:14:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.370 10:14:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.370 10:14:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:56.370 10:14:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.370 10:14:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.370 --rc genhtml_branch_coverage=1 00:08:56.370 --rc genhtml_function_coverage=1 00:08:56.370 --rc genhtml_legend=1 00:08:56.370 --rc geninfo_all_blocks=1 00:08:56.370 --rc geninfo_unexecuted_blocks=1 00:08:56.370 00:08:56.370 ' 00:08:56.370 10:14:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.370 --rc genhtml_branch_coverage=1 00:08:56.370 --rc genhtml_function_coverage=1 00:08:56.370 --rc genhtml_legend=1 00:08:56.370 --rc geninfo_all_blocks=1 00:08:56.370 --rc geninfo_unexecuted_blocks=1 00:08:56.370 00:08:56.370 ' 00:08:56.370 10:14:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.370 --rc genhtml_branch_coverage=1 00:08:56.370 --rc genhtml_function_coverage=1 00:08:56.371 --rc genhtml_legend=1 00:08:56.371 --rc geninfo_all_blocks=1 00:08:56.371 --rc geninfo_unexecuted_blocks=1 00:08:56.371 00:08:56.371 ' 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.371 --rc genhtml_branch_coverage=1 00:08:56.371 --rc genhtml_function_coverage=1 00:08:56.371 --rc genhtml_legend=1 00:08:56.371 --rc geninfo_all_blocks=1 00:08:56.371 --rc geninfo_unexecuted_blocks=1 00:08:56.371 00:08:56.371 ' 00:08:56.371 10:14:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:56.371 10:14:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59124 00:08:56.371 10:14:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:56.371 10:14:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:56.371 10:14:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59124 00:08:56.371 10:14:03 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59124 ']' 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.371 10:14:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:56.371 [2024-11-25 10:14:03.380531] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:08:56.371 [2024-11-25 10:14:03.380937] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59124 ] 00:08:56.629 [2024-11-25 10:14:03.565943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.629 [2024-11-25 10:14:03.695331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.629 [2024-11-25 10:14:03.695555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.629 [2024-11-25 10:14:03.695717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.629 [2024-11-25 10:14:03.695762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:57.197 10:14:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.197 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:57.197 POWER: Cannot set governor of lcore 0 to userspace 00:08:57.197 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:57.197 POWER: Cannot set governor of lcore 0 to performance 00:08:57.197 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:57.197 POWER: Cannot set governor of lcore 0 to userspace 00:08:57.197 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:57.197 POWER: Cannot set governor of lcore 0 to userspace 00:08:57.197 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:57.197 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:57.197 POWER: Unable to set Power Management Environment for lcore 0 00:08:57.197 [2024-11-25 10:14:04.233044] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:57.197 [2024-11-25 10:14:04.233074] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:57.197 [2024-11-25 10:14:04.233087] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:57.197 [2024-11-25 10:14:04.233119] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:57.197 [2024-11-25 10:14:04.233130] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:57.197 [2024-11-25 10:14:04.233146] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.197 10:14:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.197 10:14:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 [2024-11-25 10:14:04.573803] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:57.766 10:14:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:57.766 10:14:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.766 10:14:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 ************************************ 00:08:57.766 START TEST scheduler_create_thread 00:08:57.766 ************************************ 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 2 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 3 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 4 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 5 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 6 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 7 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 8 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 9 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 10 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.766 10:14:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.141 10:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.141 10:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:59.141 10:14:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:59.141 10:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.141 10:14:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.522 ************************************ 00:09:00.522 END TEST scheduler_create_thread 00:09:00.522 ************************************ 00:09:00.522 10:14:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.522 00:09:00.522 real 0m2.615s 00:09:00.522 user 0m0.027s 00:09:00.522 sys 0m0.008s 00:09:00.522 10:14:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.522 10:14:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.522 10:14:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:00.522 10:14:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59124 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59124 ']' 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59124 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59124 00:09:00.522 killing process with pid 59124 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59124' 00:09:00.522 10:14:07 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59124 00:09:00.522 10:14:07 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59124 00:09:00.781 [2024-11-25 10:14:07.683204] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:02.177 ************************************ 00:09:02.177 END TEST event_scheduler 00:09:02.177 ************************************ 00:09:02.177 00:09:02.177 real 0m5.783s 00:09:02.177 user 0m9.750s 00:09:02.177 sys 0m0.559s 00:09:02.177 10:14:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.177 10:14:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.177 10:14:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:02.177 10:14:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:02.177 10:14:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.177 10:14:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.177 10:14:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:02.177 ************************************ 00:09:02.177 START TEST app_repeat 00:09:02.177 ************************************ 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59236 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:02.177 Process app_repeat pid: 59236 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59236' 00:09:02.177 spdk_app_start Round 0 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:02.177 10:14:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.177 10:14:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.177 [2024-11-25 10:14:08.981016] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
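For reference, the scheduler_create_thread test that just finished above is driven entirely over the app's RPC socket. Reconstructed from the trace (rpc_cmd is autotest's wrapper around scripts/rpc.py, and the eight pinned-thread calls are condensed into a loop here), the sequence is roughly:

    # Four busy threads pinned to cores 0-3 (-a 100) and four idle ones (-a 0).
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # Two unpinned threads: one at fixed 30% activity, one raised to 50% after creation.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)  # id 11 in this run
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # A throwaway thread, created and deleted again to exercise teardown (id 12 here).
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"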
00:09:02.177 [2024-11-25 10:14:08.981145] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59236 ] 00:09:02.177 [2024-11-25 10:14:09.164562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.462 [2024-11-25 10:14:09.284090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.462 [2024-11-25 10:14:09.284123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.029 10:14:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.029 10:14:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:03.029 10:14:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.029 Malloc0 00:09:03.288 10:14:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.546 Malloc1 00:09:03.546 10:14:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.546 10:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:03.546 /dev/nbd0 00:09:03.803 10:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:03.803 10:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:03.803 10:14:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:03.803 10:14:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:03.803 10:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:03.803 10:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:03.803 10:14:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:03.803 10:14:10 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:03.804 1+0 records in 00:09:03.804 1+0 records out 00:09:03.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395706 s, 10.4 MB/s 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:03.804 10:14:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:03.804 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.804 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.804 10:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:03.804 /dev/nbd1 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.061 1+0 records in 00:09:04.061 1+0 records out 00:09:04.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276226 s, 14.8 MB/s 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:04.061 10:14:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
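Each nbd_start_disk above is followed by a waitfornbd probe whose steps are all visible in the trace: poll /proc/partitions until the device name appears, then read one 4 KiB block back through O_DIRECT and confirm the resulting file is non-empty. A minimal sketch, assuming the helper lives in autotest_common.sh as the line numbers suggest; the retry ceiling of 20 and the dd/stat check are taken from the log, while the sleep between polls is an assumption:

    waitfornbd() {
        local nbd_name=$1 i size
        local test_file=$rootdir/test/event/nbdtest   # /home/vagrant/spdk_repo/spdk/test/event/nbdtest here
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel registered the device?
            sleep 0.1                                          # assumed back-off, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # Prove the device actually services reads, not merely that the node exists.
            dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$test_file")
            rm -f "$test_file"
            [[ $size != 0 ]] && return 0
        done
        return 1
    }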
00:09:04.061 10:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.061 10:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:04.061 { 00:09:04.061 "nbd_device": "/dev/nbd0", 00:09:04.061 "bdev_name": "Malloc0" 00:09:04.061 }, 00:09:04.061 { 00:09:04.061 "nbd_device": "/dev/nbd1", 00:09:04.061 "bdev_name": "Malloc1" 00:09:04.061 } 00:09:04.061 ]' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:04.320 { 00:09:04.320 "nbd_device": "/dev/nbd0", 00:09:04.320 "bdev_name": "Malloc0" 00:09:04.320 }, 00:09:04.320 { 00:09:04.320 "nbd_device": "/dev/nbd1", 00:09:04.320 "bdev_name": "Malloc1" 00:09:04.320 } 00:09:04.320 ]' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:04.320 /dev/nbd1' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:04.320 /dev/nbd1' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:04.320 256+0 records in 00:09:04.320 256+0 records out 00:09:04.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126545 s, 82.9 MB/s 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:04.320 256+0 records in 00:09:04.320 256+0 records out 00:09:04.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284227 s, 36.9 MB/s 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:04.320 256+0 records in 00:09:04.320 256+0 records out 00:09:04.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316367 s, 33.1 MB/s 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.320 10:14:11 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.320 10:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.579 10:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.837 10:14:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.837 10:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:05.095 10:14:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:05.095 10:14:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:05.353 10:14:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:06.728 [2024-11-25 10:14:13.589275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:06.728 [2024-11-25 10:14:13.699397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.728 [2024-11-25 10:14:13.699398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.986 [2024-11-25 10:14:13.892014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:06.986 [2024-11-25 10:14:13.892073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:08.363 spdk_app_start Round 1 00:09:08.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.363 10:14:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:08.363 10:14:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:08.363 10:14:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
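Round 0's data path is complete at this point, and Rounds 1 and 2 repeat it below. The write/verify cycle is captured in full by the trace: nbd_dd_data_verify fills a 1 MiB scratch file from /dev/urandom, dd's it onto every NBD device with O_DIRECT, then runs again in verify mode and compares the first 1 MiB of each device byte-for-byte before deleting the scratch file. Roughly, with names as they appear in the traced nbd_common.sh:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=$rootdir/test/event/nbdrandtest   # nbdrandtest path as logged
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct  # push it to each device
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # byte-wise compare; any mismatch fails the test
            done
            rm "$tmp_file"
        fi
    }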
00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.363 10:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 10:14:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.622 10:14:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:08.622 10:14:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.190 Malloc0 00:09:09.190 10:14:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.190 Malloc1 00:09:09.462 10:14:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.462 10:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:09.462 /dev/nbd0 00:09:09.721 10:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:09.721 10:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:09.721 1+0 records in 00:09:09.721 1+0 records out 
00:09:09.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374721 s, 10.9 MB/s 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.721 10:14:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:09.721 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.721 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.721 10:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:09.980 /dev/nbd1 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:09.980 1+0 records in 00:09:09.980 1+0 records out 00:09:09.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434 s, 9.4 MB/s 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.980 10:14:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.980 10:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:10.239 { 00:09:10.239 "nbd_device": "/dev/nbd0", 00:09:10.239 "bdev_name": "Malloc0" 00:09:10.239 }, 00:09:10.239 { 00:09:10.239 "nbd_device": "/dev/nbd1", 00:09:10.239 "bdev_name": "Malloc1" 00:09:10.239 } 00:09:10.239 
]' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:10.239 { 00:09:10.239 "nbd_device": "/dev/nbd0", 00:09:10.239 "bdev_name": "Malloc0" 00:09:10.239 }, 00:09:10.239 { 00:09:10.239 "nbd_device": "/dev/nbd1", 00:09:10.239 "bdev_name": "Malloc1" 00:09:10.239 } 00:09:10.239 ]' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:10.239 /dev/nbd1' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:10.239 /dev/nbd1' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:10.239 256+0 records in 00:09:10.239 256+0 records out 00:09:10.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183186 s, 57.2 MB/s 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:10.239 256+0 records in 00:09:10.239 256+0 records out 00:09:10.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312556 s, 33.5 MB/s 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:10.239 256+0 records in 00:09:10.239 256+0 records out 00:09:10.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316197 s, 33.2 MB/s 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:10.239 10:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.240 10:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.498 10:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.499 10:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:10.499 10:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.499 10:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.499 10:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.066 10:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:11.066 10:14:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:11.066 10:14:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:11.632 10:14:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:13.009 [2024-11-25 10:14:19.735539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:13.009 [2024-11-25 10:14:19.849829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.009 [2024-11-25 10:14:19.849853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.009 [2024-11-25 10:14:20.050735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:13.009 [2024-11-25 10:14:20.050827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:14.926 10:14:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:14.926 spdk_app_start Round 2 00:09:14.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.926 10:14:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:14.926 10:14:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
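The count checks that bracket each round (count=2 while the disks are attached, count=0 after they are stopped) come from a small JSON pipeline, fully visible above: nbd_get_disks returns an array of {nbd_device, bdev_name} objects, jq pulls out the device paths, and grep -c tallies them. A sketch; the || true guard is inferred from the bare "true" step the trace shows when the array is empty:

    nbd_get_count() {
        local rpc_server=$1 disks_json disks_name count
        disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')  # one /dev/nbdX per line
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)        # grep exits 1 on zero matches
        echo "$count"
    }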
00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.927 10:14:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:14.927 10:14:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.185 Malloc0 00:09:15.185 10:14:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.444 Malloc1 00:09:15.444 10:14:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.444 10:14:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:15.703 /dev/nbd0 00:09:15.703 10:14:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.703 10:14:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.703 1+0 records in 00:09:15.703 1+0 records out 
00:09:15.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247775 s, 16.5 MB/s 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:15.703 10:14:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:15.703 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.703 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.703 10:14:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:16.012 /dev/nbd1 00:09:16.012 10:14:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:16.012 10:14:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:16.012 10:14:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:16.013 1+0 records in 00:09:16.013 1+0 records out 00:09:16.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380279 s, 10.8 MB/s 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:16.013 10:14:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:16.013 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.013 10:14:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.013 10:14:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.013 10:14:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.013 10:14:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.013 10:14:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:16.013 { 00:09:16.013 "nbd_device": "/dev/nbd0", 00:09:16.013 "bdev_name": "Malloc0" 00:09:16.013 }, 00:09:16.013 { 00:09:16.013 "nbd_device": "/dev/nbd1", 00:09:16.013 "bdev_name": "Malloc1" 00:09:16.013 } 
00:09:16.013 ]' 00:09:16.013 10:14:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:16.013 { 00:09:16.013 "nbd_device": "/dev/nbd0", 00:09:16.013 "bdev_name": "Malloc0" 00:09:16.013 }, 00:09:16.013 { 00:09:16.013 "nbd_device": "/dev/nbd1", 00:09:16.013 "bdev_name": "Malloc1" 00:09:16.013 } 00:09:16.013 ]' 00:09:16.013 10:14:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:16.271 /dev/nbd1' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:16.271 /dev/nbd1' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:16.271 256+0 records in 00:09:16.271 256+0 records out 00:09:16.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101688 s, 103 MB/s 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:16.271 256+0 records in 00:09:16.271 256+0 records out 00:09:16.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304428 s, 34.4 MB/s 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:16.271 256+0 records in 00:09:16.271 256+0 records out 00:09:16.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299057 s, 35.1 MB/s 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:16.271 10:14:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.272 10:14:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.530 10:14:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.530 10:14:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.530 10:14:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.530 10:14:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.530 10:14:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.531 10:14:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.531 10:14:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:16.531 10:14:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.531 10:14:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.531 10:14:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:16.789 10:14:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.790 10:14:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.049 10:14:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:17.049 10:14:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.049 10:14:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:17.049 10:14:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:17.049 10:14:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:17.617 10:14:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:18.646 [2024-11-25 10:14:25.582214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.646 [2024-11-25 10:14:25.697442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.646 [2024-11-25 10:14:25.697442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.905 [2024-11-25 10:14:25.897538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:18.905 [2024-11-25 10:14:25.897608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:20.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:20.812 10:14:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59236 /var/tmp/spdk-nbd.sock 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59236 ']' 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
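That was the last of the three SIGTERM rounds. Pieced together from the trace, the driver in event.sh launches the app_repeat binary once with a repeat count of 4 (-t 4) and then, per round, waits for the RPC socket, runs the malloc/NBD verification above, and asks the app to shut down and restart itself:

    # Sketch of the event.sh driver as reconstructed from this trace.
    "$rootdir/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock   # block until the socket is up
        # ... bdev_malloc_create + nbd_rpc_data_verify, exactly as traced above ...
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                            # let the app come back for the next round
    done

    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock       # Round 3: final instance, killed for real below
    killprocess $repeat_pid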
00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:20.812 10:14:27 event.app_repeat -- event/event.sh@39 -- # killprocess 59236 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59236 ']' 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59236 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59236 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.812 killing process with pid 59236 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59236' 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59236 00:09:20.812 10:14:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59236 00:09:21.748 spdk_app_start is called in Round 0. 00:09:21.748 Shutdown signal received, stop current app iteration 00:09:21.748 Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 reinitialization... 00:09:21.748 spdk_app_start is called in Round 1. 00:09:21.748 Shutdown signal received, stop current app iteration 00:09:21.748 Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 reinitialization... 00:09:21.748 spdk_app_start is called in Round 2. 00:09:21.748 Shutdown signal received, stop current app iteration 00:09:21.748 Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 reinitialization... 00:09:21.748 spdk_app_start is called in Round 3. 00:09:21.748 Shutdown signal received, stop current app iteration 00:09:21.748 10:14:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:21.748 10:14:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:21.748 00:09:21.748 real 0m19.828s 00:09:21.748 user 0m42.359s 00:09:21.748 sys 0m3.235s 00:09:21.748 10:14:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.748 10:14:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:21.748 ************************************ 00:09:21.748 END TEST app_repeat 00:09:21.748 ************************************ 00:09:21.748 10:14:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:21.748 10:14:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:21.748 10:14:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.748 10:14:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.748 10:14:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:21.748 ************************************ 00:09:21.748 START TEST cpu_locks 00:09:21.748 ************************************ 00:09:21.748 10:14:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:22.007 * Looking for test storage... 
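killprocess, traced in full just above (autotest_common.sh@954-978), is deliberately paranoid: it refuses an empty pid, probes the process with kill -0, reads the command name back with ps so it never signals a sudo wrapper by mistake, then kills and reaps it. A sketch of the path this run takes (pid 59236, comm reactor_0); the sudo and non-Linux branches are not exercised here and are left out:

    # Reconstructed from the killprocess trace above.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1        # the '[' -z 59236 ']' guard in the trace
        kill -0 "$pid"                   # fail early if the process is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then # a sudo wrapper would need its child killed instead
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                      # reap, so the harness sees the final exit status
    }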
00:09:22.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:22.007 10:14:28 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.007 10:14:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.007 10:14:28 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.007 10:14:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.007 --rc genhtml_branch_coverage=1 00:09:22.007 --rc genhtml_function_coverage=1 00:09:22.007 --rc genhtml_legend=1 00:09:22.007 --rc geninfo_all_blocks=1 00:09:22.007 --rc geninfo_unexecuted_blocks=1 00:09:22.007 00:09:22.007 ' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.007 --rc genhtml_branch_coverage=1 00:09:22.007 --rc genhtml_function_coverage=1 
00:09:22.007 --rc genhtml_legend=1 00:09:22.007 --rc geninfo_all_blocks=1 00:09:22.007 --rc geninfo_unexecuted_blocks=1 00:09:22.007 00:09:22.007 ' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.007 --rc genhtml_branch_coverage=1 00:09:22.007 --rc genhtml_function_coverage=1 00:09:22.007 --rc genhtml_legend=1 00:09:22.007 --rc geninfo_all_blocks=1 00:09:22.007 --rc geninfo_unexecuted_blocks=1 00:09:22.007 00:09:22.007 ' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.007 --rc genhtml_branch_coverage=1 00:09:22.007 --rc genhtml_function_coverage=1 00:09:22.007 --rc genhtml_legend=1 00:09:22.007 --rc geninfo_all_blocks=1 00:09:22.007 --rc geninfo_unexecuted_blocks=1 00:09:22.007 00:09:22.007 ' 00:09:22.007 10:14:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:22.007 10:14:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:22.007 10:14:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:22.007 10:14:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.007 10:14:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.007 ************************************ 00:09:22.007 START TEST default_locks 00:09:22.007 ************************************ 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59685 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59685 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59685 ']' 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.007 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.008 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.008 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.008 10:14:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.277 [2024-11-25 10:14:29.169831] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
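The lcov gate a few lines back is a pure-bash semantic version compare from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays, each field is normalized through a decimal helper, and the arrays are walked until one field decides the comparison. A reconstruction of the path this run takes (lt 1.15 2); the fallback for non-numeric fields and the equal-versions return are assumptions:

    # decimal: pass numeric fields through, anything else becomes 0 (assumed fallback).
    decimal() {
        local d=$1
        if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
    }

    # cmp_versions VER1 OP VER2, as traced for: cmp_versions 1.15 '<' 2
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if ((ver1[v] > ver2[v])); then
                [[ $op == '>' ]]
                return
            elif ((ver1[v] < ver2[v])); then
                [[ $op == '<' ]]
                return
            fi
        done
        [[ $op != '<' && $op != '>' ]] # equal versions only satisfy <=, >=, == (assumed)
    }

Here 1 < 2 decides at the first field, so lt 1.15 2 succeeds and the harness selects the LCOV_OPTS exported above.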
00:09:22.278 [2024-11-25 10:14:29.170380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:09:22.278 [2024-11-25 10:14:29.363681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.535 [2024-11-25 10:14:29.484882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.471 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.471 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:23.471 10:14:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59685 00:09:23.471 10:14:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59685 00:09:23.471 10:14:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59685 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59685 ']' 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59685 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59685 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.038 killing process with pid 59685 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59685' 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59685 00:09:24.038 10:14:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59685 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59685 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59685 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59685 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59685 ']' 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
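The central assertion of all these cpu_locks tests is the locks_exist helper seen above (cpu_locks.sh@22): a target that claimed its cores must be holding open file locks whose paths contain spdk_cpu_lock, and lslocks can attribute those locks to a pid. The whole check is one pipeline:

    # cpu_locks.sh@22 as traced: succeed iff the pid holds an spdk_cpu_lock file lock.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

In this run 'lslocks -p 59685 | grep -q spdk_cpu_lock' succeeds, confirming the -m 0x1 target really took the core 0 lock before being killed.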
00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.573 ERROR: process (pid: 59685) is no longer running 00:09:26.573 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59685) - No such process 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:26.573 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:26.574 ************************************ 00:09:26.574 END TEST default_locks 00:09:26.574 ************************************ 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:26.574 00:09:26.574 real 0m4.245s 00:09:26.574 user 0m4.199s 00:09:26.574 sys 0m0.725s 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.574 10:14:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.574 10:14:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:26.574 10:14:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.574 10:14:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.574 10:14:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.574 ************************************ 00:09:26.574 START TEST default_locks_via_rpc 00:09:26.574 ************************************ 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59766 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59766 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59766 ']' 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.574 10:14:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.574 [2024-11-25 10:14:33.476662] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:09:26.574 [2024-11-25 10:14:33.476803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:09:26.574 [2024-11-25 10:14:33.656698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.834 [2024-11-25 10:14:33.775412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59766 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.769 10:14:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59766 ']' 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.335 10:14:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.335 killing process with pid 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59766' 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59766 00:09:28.335 10:14:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59766 00:09:30.924 00:09:30.924 real 0m4.248s 00:09:30.924 user 0m4.217s 00:09:30.924 sys 0m0.719s 00:09:30.924 10:14:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.924 10:14:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.924 ************************************ 00:09:30.924 END TEST default_locks_via_rpc 00:09:30.924 ************************************ 00:09:30.924 10:14:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:30.924 10:14:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.924 10:14:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.924 10:14:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.924 ************************************ 00:09:30.924 START TEST non_locking_app_on_locked_coremask 00:09:30.924 ************************************ 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59840 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59840 /var/tmp/spdk.sock 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59840 ']' 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.924 10:14:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.924 [2024-11-25 10:14:37.797813] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
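default_locks_via_rpc, which just finished, exercises the runtime toggle instead of a startup flag: the target boots with locking enabled, the test drops the locks over RPC, verifies no spdk_cpu_lock files remain, then re-claims them and checks with locks_exist. Under the same socket paths as the trace, the sequence amounts to:

    # Sequence reconstructed from the default_locks_via_rpc trace above.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # release the per-core lock files
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # claim them again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock                    # assert the locks are back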
00:09:30.924 [2024-11-25 10:14:37.797942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:09:30.924 [2024-11-25 10:14:37.980401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.183 [2024-11-25 10:14:38.099300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59856 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59856 /var/tmp/spdk2.sock 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59856 ']' 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.116 10:14:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.116 [2024-11-25 10:14:39.016520] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:09:32.116 [2024-11-25 10:14:39.016646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59856 ] 00:09:32.116 [2024-11-25 10:14:39.204768] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
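Worth pausing on what just happened: the first target (pid 59840) claimed core 0 via -m 0x1, yet a second target came up on the same core without complaint because it was launched with --disable-cpumask-locks, which is why it logs 'CPU core locks deactivated' instead of the lock-conflict error seen later in this log. Stripped of the absolute paths, the two launches are:

    # As traced (non_locking_app_on_locked_coremask); the '&' backgrounding is assumed,
    # the harness tracks the pids as spdk_tgt_pid and spdk_tgt_pid2.
    build/bin/spdk_tgt -m 0x1 &                                                 # locks core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0, takes no lock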
00:09:32.116 [2024-11-25 10:14:39.204842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.373 [2024-11-25 10:14:39.452205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.908 10:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.908 10:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:34.908 10:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59840 00:09:34.908 10:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59840 00:09:34.908 10:14:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59840 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59840 ']' 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59840 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.484 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59840 00:09:35.742 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.742 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.742 killing process with pid 59840 00:09:35.742 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59840' 00:09:35.742 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59840 00:09:35.742 10:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59840 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59856 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59856 ']' 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59856 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59856 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.003 killing process with pid 59856 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59856' 00:09:41.003 10:14:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59856 00:09:41.003 10:14:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59856 00:09:42.914 ************************************ 00:09:42.914 END TEST non_locking_app_on_locked_coremask 00:09:42.914 ************************************ 00:09:42.914 00:09:42.914 real 0m12.248s 00:09:42.914 user 0m12.603s 00:09:42.914 sys 0m1.472s 00:09:42.914 10:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.914 10:14:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:42.914 10:14:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:42.914 10:14:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.914 10:14:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.914 10:14:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:42.914 ************************************ 00:09:42.914 START TEST locking_app_on_unlocked_coremask 00:09:42.914 ************************************ 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60018 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60018 /var/tmp/spdk.sock 00:09:42.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60018 ']' 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.914 10:14:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.173 [2024-11-25 10:14:50.124227] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:09:43.173 [2024-11-25 10:14:50.124356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60018 ] 00:09:43.485 [2024-11-25 10:14:50.307663] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:43.485 [2024-11-25 10:14:50.307725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.485 [2024-11-25 10:14:50.428786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60034 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60034 /var/tmp/spdk2.sock 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60034 ']' 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:44.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.420 10:14:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:44.420 [2024-11-25 10:14:51.435123] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
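This test inverts the previous one: now the first target (60018) runs with --disable-cpumask-locks and the second (60034) keeps locking enabled, so the second instance is the one that claims core 0. With two targets alive, every RPC has to name its socket; the first listens on the default /var/tmp/spdk.sock and the second was given /var/tmp/spdk2.sock via -r, so the harness keeps rpc_sock1/rpc_sock2 (cpu_locks.sh@11-12) and passes -s accordingly. For instance (framework_get_reactors is only an illustrative method here, not one this test calls):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors   # first instance
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors  # second instance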
00:09:44.420 [2024-11-25 10:14:51.435528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60034 ] 00:09:44.679 [2024-11-25 10:14:51.625342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.936 [2024-11-25 10:14:51.890992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.473 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.473 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:47.473 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60034 00:09:47.473 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60034 00:09:47.473 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60018 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60018 ']' 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60018 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60018 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.042 killing process with pid 60018 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60018' 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60018 00:09:48.042 10:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60018 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60034 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60034 ']' 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60034 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60034 00:09:53.320 killing process with pid 60034 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.320 10:14:59 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60034' 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60034 00:09:53.320 10:14:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60034 00:09:55.214 ************************************ 00:09:55.214 END TEST locking_app_on_unlocked_coremask 00:09:55.214 ************************************ 00:09:55.214 00:09:55.214 real 0m12.259s 00:09:55.214 user 0m12.631s 00:09:55.214 sys 0m1.448s 00:09:55.214 10:15:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.214 10:15:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.473 10:15:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:55.473 10:15:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.473 10:15:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.473 10:15:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.473 ************************************ 00:09:55.473 START TEST locking_app_on_locked_coremask 00:09:55.473 ************************************ 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60190 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60190 /var/tmp/spdk.sock 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60190 ']' 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.473 10:15:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.473 [2024-11-25 10:15:02.451993] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:09:55.473 [2024-11-25 10:15:02.452890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60190 ] 00:09:55.732 [2024-11-25 10:15:02.623292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.732 [2024-11-25 10:15:02.739947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60206 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60206 /var/tmp/spdk2.sock 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60206 /var/tmp/spdk2.sock 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60206 /var/tmp/spdk2.sock 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60206 ']' 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.693 10:15:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.693 [2024-11-25 10:15:03.675352] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
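The NOT wrapper traced here is the harness's expected-failure assertion: locking_app_on_locked_coremask starts a second target (60206) on the already-locked core 0, so waitforlisten for it must fail. NOT runs the command, captures the exit status without tripping set -e, and succeeds only on an ordinary failure. A sketch of the observable behavior; the '(( es > 128 ))' and '[[ -n '' ]]' checks in the trace suggest special handling for signal deaths and an optional expected-status list, both simplified away here:

    # Reconstructed from autotest_common.sh@652-679 as traced.
    NOT() {
        local es=0
        "$@" || es=$?   # capture failure instead of aborting under set -e
        if ((es > 128)); then
            return 1    # assumption: death by signal does not count as a pass
        fi
        ((!es == 0))    # true exactly when es != 0, i.e. the command failed
    }

Here NOT waitforlisten 60206 passes: the 'kill: (60206) - No such process' line in the next trace shows the second target already exited on the lock conflict, so waitforlisten returned 1.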
00:09:56.693 [2024-11-25 10:15:03.675470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60206 ] 00:09:56.952 [2024-11-25 10:15:03.860038] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60190 has claimed it. 00:09:56.953 [2024-11-25 10:15:03.860096] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:57.212 ERROR: process (pid: 60206) is no longer running 00:09:57.212 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60206) - No such process 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60190 00:09:57.212 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60190 00:09:57.213 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60190 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60190 ']' 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60190 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60190 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60190' 00:09:57.781 killing process with pid 60190 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60190 00:09:57.781 10:15:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60190 00:10:00.315 ************************************ 00:10:00.315 END TEST locking_app_on_locked_coremask 00:10:00.315 ************************************ 00:10:00.315 00:10:00.315 real 0m4.862s 00:10:00.315 user 0m5.064s 00:10:00.315 sys 0m0.851s 00:10:00.315 10:15:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.315 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.315 10:15:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:00.315 10:15:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.315 10:15:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.315 10:15:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.315 ************************************ 00:10:00.315 START TEST locking_overlapped_coremask 00:10:00.315 ************************************ 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60275 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60275 /var/tmp/spdk.sock 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60275 ']' 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.315 10:15:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.315 [2024-11-25 10:15:07.394133] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:10:00.315 [2024-11-25 10:15:07.394262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:10:00.573 [2024-11-25 10:15:07.579756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.832 [2024-11-25 10:15:07.703722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.832 [2024-11-25 10:15:07.703865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.832 [2024-11-25 10:15:07.703906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60302 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60302 /var/tmp/spdk2.sock 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60302 /var/tmp/spdk2.sock 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60302 /var/tmp/spdk2.sock 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60302 ']' 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:01.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.769 10:15:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:01.769 [2024-11-25 10:15:08.658456] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
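The masks in this test are chosen so the overlap is exactly one core: -m 0x7 is binary 111 (cores 0-2) for the first target, and -m 0x1c is binary 11100 (cores 2-4) for the second, so the only contested lock is core 2, which is precisely where the next trace line fails. A throwaway helper to decode a mask (illustrative only, not part of the harness):

    # Print the core indices set in an SPDK-style hex core mask.
    mask_to_cores() {
        local mask=$(($1)) core=0
        while ((mask > 0)); do
            if ((mask & 1)); then printf '%s ' "$core"; fi
            mask=$((mask >> 1))
            core=$((core + 1))
        done
        printf '\n'
    }
    mask_to_cores 0x7  # -> 0 1 2
    mask_to_cores 0x1c # -> 2 3 4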
00:10:01.769 [2024-11-25 10:15:08.658593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:10:01.769 [2024-11-25 10:15:08.843739] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60275 has claimed it. 00:10:01.769 [2024-11-25 10:15:08.843837] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:02.339 ERROR: process (pid: 60302) is no longer running 00:10:02.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60302) - No such process 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60275 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60275 ']' 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60275 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60275 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60275' 00:10:02.339 killing process with pid 60275 00:10:02.339 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60275 00:10:02.339 10:15:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60275 00:10:04.872 00:10:04.872 real 0m4.472s 00:10:04.872 user 0m12.121s 00:10:04.872 sys 0m0.609s 00:10:04.872 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.872 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.872 ************************************ 00:10:04.872 END TEST locking_overlapped_coremask 00:10:04.872 ************************************ 00:10:04.873 10:15:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:04.873 10:15:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.873 10:15:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.873 10:15:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.873 ************************************ 00:10:04.873 START TEST locking_overlapped_coremask_via_rpc 00:10:04.873 ************************************ 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60366 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60366 /var/tmp/spdk.sock 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:10:04.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.873 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.873 [2024-11-25 10:15:11.943053] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:04.873 [2024-11-25 10:15:11.943183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:10:05.131 [2024-11-25 10:15:12.130164] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
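Annotation: everything check_remaining_locks verifies above is visible on disk. Each claimed core is backed by a lock file named /var/tmp/spdk_cpu_lock_NNN, and the helper simply compares the glob against the brace expansion for the expected mask. A minimal sketch, assuming a built SPDK tree, configured hugepages, and the repo root as working directory; the fixed sleep is a crude stand-in for the harness's waitforlisten:

    build/bin/spdk_tgt -m 0x7 &
    sleep 2                         # assumption: the harness waits on the RPC socket instead
    ls /var/tmp/spdk_cpu_lock_*     # expect: ..._000 ..._001 ..._002 for mask 0x7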
00:10:05.131 [2024-11-25 10:15:12.130217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.390 [2024-11-25 10:15:12.255980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.390 [2024-11-25 10:15:12.256113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.390 [2024-11-25 10:15:12.256142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60384 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60384 /var/tmp/spdk2.sock 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60384 ']' 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.334 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.334 [2024-11-25 10:15:13.273229] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:06.334 [2024-11-25 10:15:13.273559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60384 ] 00:10:06.593 [2024-11-25 10:15:13.484249] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
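Annotation: both targets above come up with --disable-cpumask-locks, so overlapping reactor masks are tolerated for now. The overlap that matters once locking is re-enabled is plain bit arithmetic on the two masks from the trace (0x7 for pid 60366, 0x1c for pid 60384):

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2: core 2 is the shared core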
00:10:06.593 [2024-11-25 10:15:13.484306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.852 [2024-11-25 10:15:13.725598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.852 [2024-11-25 10:15:13.728576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.852 [2024-11-25 10:15:13.728582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.757 [2024-11-25 10:15:15.812756] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60366 has claimed it. 
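Annotation: the error above is the RPC flavor of the same guarantee. The first instance re-enabled locking and claimed cores 0-2, so when the second instance asks to lock mask 0x1c, core 2 is already held and the JSON-RPC exchange below fails. The two calls, issued by hand with the socket paths from the trace:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target, /var/tmp/spdk.sock: succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails, see the response below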
00:10:08.757 request: 00:10:08.757 { 00:10:08.757 "method": "framework_enable_cpumask_locks", 00:10:08.757 "req_id": 1 00:10:08.757 } 00:10:08.757 Got JSON-RPC error response 00:10:08.757 response: 00:10:08.757 { 00:10:08.757 "code": -32603, 00:10:08.757 "message": "Failed to claim CPU core: 2" 00:10:08.757 } 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60366 /var/tmp/spdk.sock 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.757 10:15:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60384 /var/tmp/spdk2.sock 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60384 ']' 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
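Annotation: code -32603 is the generic JSON-RPC internal-error code; the useful part is the message naming the contested core. The harness asserts the failure through its NOT wrapper; a hand-rolled equivalent of that assertion, using the same socket path as above:

    if scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'expected the claim on core 2 to fail' >&2
        exit 1
    fi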
00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.016 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.584 ************************************ 00:10:09.584 END TEST locking_overlapped_coremask_via_rpc 00:10:09.584 ************************************ 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:09.584 00:10:09.584 real 0m4.595s 00:10:09.584 user 0m1.409s 00:10:09.584 sys 0m0.228s 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.584 10:15:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.584 10:15:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:09.584 10:15:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60366 ]] 00:10:09.584 10:15:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60366 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60366 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60366 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60366' 00:10:09.584 killing process with pid 60366 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60366 00:10:09.584 10:15:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60366 00:10:12.156 10:15:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60384 ]] 00:10:12.156 10:15:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60384 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60384 ']' 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60384 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.156 
10:15:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60384 00:10:12.156 killing process with pid 60384 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60384' 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60384 00:10:12.156 10:15:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60384 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:14.685 Process with pid 60366 is not found 00:10:14.685 Process with pid 60384 is not found 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60366 ]] 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60366 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60366 00:10:14.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60366) - No such process 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60366 is not found' 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60384 ]] 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60384 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60384 ']' 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60384 00:10:14.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60384) - No such process 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60384 is not found' 00:10:14.685 10:15:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:14.685 00:10:14.685 real 0m52.728s 00:10:14.685 user 1m29.751s 00:10:14.685 sys 0m7.352s 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.685 10:15:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 ************************************ 00:10:14.685 END TEST cpu_locks 00:10:14.685 ************************************ 00:10:14.685 00:10:14.685 real 1m23.796s 00:10:14.685 user 2m29.207s 00:10:14.685 sys 0m11.905s 00:10:14.685 10:15:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.685 10:15:21 event -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 ************************************ 00:10:14.685 END TEST event 00:10:14.685 ************************************ 00:10:14.685 10:15:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:14.685 10:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.685 10:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.685 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:10:14.685 ************************************ 00:10:14.685 START TEST thread 00:10:14.685 ************************************ 00:10:14.685 10:15:21 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:14.685 * Looking for test storage... 
00:10:14.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.945 10:15:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.945 10:15:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.945 10:15:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.945 10:15:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.945 10:15:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.945 10:15:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.945 10:15:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.945 10:15:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.945 10:15:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.945 10:15:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.945 10:15:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.945 10:15:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:14.945 10:15:21 thread -- scripts/common.sh@345 -- # : 1 00:10:14.945 10:15:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.945 10:15:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.945 10:15:21 thread -- scripts/common.sh@365 -- # decimal 1 00:10:14.945 10:15:21 thread -- scripts/common.sh@353 -- # local d=1 00:10:14.945 10:15:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.945 10:15:21 thread -- scripts/common.sh@355 -- # echo 1 00:10:14.945 10:15:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.945 10:15:21 thread -- scripts/common.sh@366 -- # decimal 2 00:10:14.945 10:15:21 thread -- scripts/common.sh@353 -- # local d=2 00:10:14.945 10:15:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.945 10:15:21 thread -- scripts/common.sh@355 -- # echo 2 00:10:14.945 10:15:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.945 10:15:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.945 10:15:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.945 10:15:21 thread -- scripts/common.sh@368 -- # return 0 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.945 --rc genhtml_branch_coverage=1 00:10:14.945 --rc genhtml_function_coverage=1 00:10:14.945 --rc genhtml_legend=1 00:10:14.945 --rc geninfo_all_blocks=1 00:10:14.945 --rc geninfo_unexecuted_blocks=1 00:10:14.945 00:10:14.945 ' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.945 --rc genhtml_branch_coverage=1 00:10:14.945 --rc genhtml_function_coverage=1 00:10:14.945 --rc genhtml_legend=1 00:10:14.945 --rc geninfo_all_blocks=1 00:10:14.945 --rc geninfo_unexecuted_blocks=1 00:10:14.945 00:10:14.945 ' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:14.945 --rc genhtml_branch_coverage=1 00:10:14.945 --rc genhtml_function_coverage=1 00:10:14.945 --rc genhtml_legend=1 00:10:14.945 --rc geninfo_all_blocks=1 00:10:14.945 --rc geninfo_unexecuted_blocks=1 00:10:14.945 00:10:14.945 ' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.945 --rc genhtml_branch_coverage=1 00:10:14.945 --rc genhtml_function_coverage=1 00:10:14.945 --rc genhtml_legend=1 00:10:14.945 --rc geninfo_all_blocks=1 00:10:14.945 --rc geninfo_unexecuted_blocks=1 00:10:14.945 00:10:14.945 ' 00:10:14.945 10:15:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.945 10:15:21 thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.945 ************************************ 00:10:14.945 START TEST thread_poller_perf 00:10:14.945 ************************************ 00:10:14.945 10:15:21 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:14.945 [2024-11-25 10:15:21.964087] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:14.945 [2024-11-25 10:15:21.964204] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60590 ] 00:10:15.204 [2024-11-25 10:15:22.146238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.204 [2024-11-25 10:15:22.260031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.204 Running 1000 pollers for 1 seconds with 1 microseconds period. 
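Annotation: the announcement above restates the flags the harness passed to the perf tool; the cycle counters it produces follow below. The invocation, exactly as in the trace:

    # -b 1000: register 1000 pollers; -l 1: 1 us period each; -t 1: run for 1 second
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1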
00:10:16.578 [2024-11-25T10:15:23.690Z] ====================================== 00:10:16.578 [2024-11-25T10:15:23.690Z] busy:2499037130 (cyc) 00:10:16.578 [2024-11-25T10:15:23.690Z] total_run_count: 382000 00:10:16.578 [2024-11-25T10:15:23.690Z] tsc_hz: 2490000000 (cyc) 00:10:16.578 [2024-11-25T10:15:23.690Z] ====================================== 00:10:16.578 [2024-11-25T10:15:23.690Z] poller_cost: 6541 (cyc), 2626 (nsec) 00:10:16.578 00:10:16.578 real 0m1.583s 00:10:16.579 user 0m1.370s 00:10:16.579 sys 0m0.105s 00:10:16.579 10:15:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.579 10:15:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:16.579 ************************************ 00:10:16.579 END TEST thread_poller_perf 00:10:16.579 ************************************ 00:10:16.579 10:15:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:16.579 10:15:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:16.579 10:15:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.579 10:15:23 thread -- common/autotest_common.sh@10 -- # set +x 00:10:16.579 ************************************ 00:10:16.579 START TEST thread_poller_perf 00:10:16.579 ************************************ 00:10:16.579 10:15:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:16.579 [2024-11-25 10:15:23.627242] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:16.579 [2024-11-25 10:15:23.627362] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60621 ] 00:10:16.837 [2024-11-25 10:15:23.806416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.837 Running 1000 pollers for 1 seconds with 0 microseconds period. 
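Annotation: poller_cost in the table above is simply busy cycles divided by total poller runs, converted to nanoseconds at the reported TSC rate. A back-of-the-envelope check (integer division truncates, matching the report):

    echo $(( 2499037130 / 382000 ))   # -> 6541 cycles per poller invocation
    # 6541 cyc / 2.49 GHz ~= 2626 ns, the reported per-call cost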
00:10:16.837 [2024-11-25 10:15:23.914304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.213 [2024-11-25T10:15:25.325Z] ====================================== 00:10:18.213 [2024-11-25T10:15:25.325Z] busy:2493840162 (cyc) 00:10:18.213 [2024-11-25T10:15:25.325Z] total_run_count: 5106000 00:10:18.213 [2024-11-25T10:15:25.325Z] tsc_hz: 2490000000 (cyc) 00:10:18.213 [2024-11-25T10:15:25.325Z] ====================================== 00:10:18.213 [2024-11-25T10:15:25.325Z] poller_cost: 488 (cyc), 195 (nsec) 00:10:18.213 00:10:18.213 real 0m1.571s 00:10:18.213 user 0m1.358s 00:10:18.213 sys 0m0.105s 00:10:18.213 ************************************ 00:10:18.213 END TEST thread_poller_perf 00:10:18.213 ************************************ 00:10:18.213 10:15:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.213 10:15:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:18.213 10:15:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:18.213 ************************************ 00:10:18.213 END TEST thread 00:10:18.213 ************************************ 00:10:18.213 00:10:18.213 real 0m3.532s 00:10:18.213 user 0m2.901s 00:10:18.213 sys 0m0.424s 00:10:18.213 10:15:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.213 10:15:25 thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.213 10:15:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:18.213 10:15:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:18.213 10:15:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.213 10:15:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.213 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.213 ************************************ 00:10:18.213 START TEST app_cmdline 00:10:18.213 ************************************ 00:10:18.213 10:15:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:18.472 * Looking for test storage... 
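Annotation: the zero-period run that closes the thread suite above is the contrast case. With -l 0 the pollers are untimed, so each invocation skips the periodic-poller bookkeeping and costs 488 cycles instead of 6541, roughly 13x cheaper. Same arithmetic as before:

    echo $(( 2493840162 / 5106000 ))  # -> 488 cycles for an untimed poller
    # 488 cyc / 2.49 GHz ~= 195 ns, matching the report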
00:10:18.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.472 10:15:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.472 --rc genhtml_branch_coverage=1 00:10:18.472 --rc genhtml_function_coverage=1 00:10:18.472 --rc genhtml_legend=1 00:10:18.472 --rc geninfo_all_blocks=1 00:10:18.472 --rc geninfo_unexecuted_blocks=1 00:10:18.472 00:10:18.472 ' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.472 --rc genhtml_branch_coverage=1 00:10:18.472 --rc genhtml_function_coverage=1 00:10:18.472 --rc genhtml_legend=1 00:10:18.472 --rc geninfo_all_blocks=1 00:10:18.472 --rc geninfo_unexecuted_blocks=1 00:10:18.472 
00:10:18.472 ' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.472 --rc genhtml_branch_coverage=1 00:10:18.472 --rc genhtml_function_coverage=1 00:10:18.472 --rc genhtml_legend=1 00:10:18.472 --rc geninfo_all_blocks=1 00:10:18.472 --rc geninfo_unexecuted_blocks=1 00:10:18.472 00:10:18.472 ' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.472 --rc genhtml_branch_coverage=1 00:10:18.472 --rc genhtml_function_coverage=1 00:10:18.472 --rc genhtml_legend=1 00:10:18.472 --rc geninfo_all_blocks=1 00:10:18.472 --rc geninfo_unexecuted_blocks=1 00:10:18.472 00:10:18.472 ' 00:10:18.472 10:15:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:18.472 10:15:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60710 00:10:18.472 10:15:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:18.472 10:15:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60710 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60710 ']' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.472 10:15:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:18.730 [2024-11-25 10:15:25.599354] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:10:18.730 [2024-11-25 10:15:25.599503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60710 ] 00:10:18.730 [2024-11-25 10:15:25.781752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.988 [2024-11-25 10:15:25.896809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:19.924 { 00:10:19.924 "version": "SPDK v25.01-pre git sha1 eb055bb93", 00:10:19.924 "fields": { 00:10:19.924 "major": 25, 00:10:19.924 "minor": 1, 00:10:19.924 "patch": 0, 00:10:19.924 "suffix": "-pre", 00:10:19.924 "commit": "eb055bb93" 00:10:19.924 } 00:10:19.924 } 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:19.924 10:15:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.924 10:15:26 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:19.925 10:15:26 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:20.182 request: 00:10:20.182 { 00:10:20.182 "method": "env_dpdk_get_mem_stats", 00:10:20.182 "req_id": 1 00:10:20.182 } 00:10:20.182 Got JSON-RPC error response 00:10:20.182 response: 00:10:20.182 { 00:10:20.182 "code": -32601, 00:10:20.182 "message": "Method not found" 00:10:20.182 } 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.182 10:15:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60710 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60710 ']' 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60710 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60710 00:10:20.182 killing process with pid 60710 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60710' 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 60710 00:10:20.182 10:15:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 60710 00:10:22.715 ************************************ 00:10:22.715 END TEST app_cmdline 00:10:22.715 ************************************ 00:10:22.715 00:10:22.715 real 0m4.356s 00:10:22.715 user 0m4.534s 00:10:22.715 sys 0m0.630s 00:10:22.715 10:15:29 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.715 10:15:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:22.715 10:15:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:22.715 10:15:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:22.715 10:15:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.715 10:15:29 -- common/autotest_common.sh@10 -- # set +x 00:10:22.715 ************************************ 00:10:22.715 START TEST version 00:10:22.715 ************************************ 00:10:22.715 10:15:29 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:22.715 * Looking for test storage... 
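Annotation: the cmdline test that just finished exercises the RPC allowlist. The target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer, and anything else, here env_dpdk_get_mem_stats, is rejected with -32601 Method not found. Reproduced by hand, assuming the repo root as working directory:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods          # lists only the two allowed methods
    scripts/rpc.py spdk_get_version         # version JSON as printed above
    scripts/rpc.py env_dpdk_get_mem_stats   # JSON-RPC error -32601, Method not found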
00:10:22.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:22.715 10:15:29 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.973 10:15:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.973 10:15:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.973 10:15:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.973 10:15:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.973 10:15:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.973 10:15:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.973 10:15:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.973 10:15:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.973 10:15:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.973 10:15:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.973 10:15:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.973 10:15:29 version -- scripts/common.sh@344 -- # case "$op" in 00:10:22.973 10:15:29 version -- scripts/common.sh@345 -- # : 1 00:10:22.973 10:15:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.973 10:15:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.973 10:15:29 version -- scripts/common.sh@365 -- # decimal 1 00:10:22.973 10:15:29 version -- scripts/common.sh@353 -- # local d=1 00:10:22.973 10:15:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.973 10:15:29 version -- scripts/common.sh@355 -- # echo 1 00:10:22.973 10:15:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.973 10:15:29 version -- scripts/common.sh@366 -- # decimal 2 00:10:22.973 10:15:29 version -- scripts/common.sh@353 -- # local d=2 00:10:22.973 10:15:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.973 10:15:29 version -- scripts/common.sh@355 -- # echo 2 00:10:22.973 10:15:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.973 10:15:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.973 10:15:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.973 10:15:29 version -- scripts/common.sh@368 -- # return 0 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.973 --rc genhtml_branch_coverage=1 00:10:22.973 --rc genhtml_function_coverage=1 00:10:22.973 --rc genhtml_legend=1 00:10:22.973 --rc geninfo_all_blocks=1 00:10:22.973 --rc geninfo_unexecuted_blocks=1 00:10:22.973 00:10:22.973 ' 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.973 --rc genhtml_branch_coverage=1 00:10:22.973 --rc genhtml_function_coverage=1 00:10:22.973 --rc genhtml_legend=1 00:10:22.973 --rc geninfo_all_blocks=1 00:10:22.973 --rc geninfo_unexecuted_blocks=1 00:10:22.973 00:10:22.973 ' 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.973 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:22.973 --rc genhtml_branch_coverage=1 00:10:22.973 --rc genhtml_function_coverage=1 00:10:22.973 --rc genhtml_legend=1 00:10:22.973 --rc geninfo_all_blocks=1 00:10:22.973 --rc geninfo_unexecuted_blocks=1 00:10:22.973 00:10:22.973 ' 00:10:22.973 10:15:29 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.973 --rc genhtml_branch_coverage=1 00:10:22.973 --rc genhtml_function_coverage=1 00:10:22.974 --rc genhtml_legend=1 00:10:22.974 --rc geninfo_all_blocks=1 00:10:22.974 --rc geninfo_unexecuted_blocks=1 00:10:22.974 00:10:22.974 ' 00:10:22.974 10:15:29 version -- app/version.sh@17 -- # get_header_version major 00:10:22.974 10:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # cut -f2 00:10:22.974 10:15:29 version -- app/version.sh@17 -- # major=25 00:10:22.974 10:15:29 version -- app/version.sh@18 -- # get_header_version minor 00:10:22.974 10:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # cut -f2 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.974 10:15:29 version -- app/version.sh@18 -- # minor=1 00:10:22.974 10:15:29 version -- app/version.sh@19 -- # get_header_version patch 00:10:22.974 10:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # cut -f2 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.974 10:15:29 version -- app/version.sh@19 -- # patch=0 00:10:22.974 10:15:29 version -- app/version.sh@20 -- # get_header_version suffix 00:10:22.974 10:15:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # cut -f2 00:10:22.974 10:15:29 version -- app/version.sh@14 -- # tr -d '"' 00:10:22.974 10:15:29 version -- app/version.sh@20 -- # suffix=-pre 00:10:22.974 10:15:29 version -- app/version.sh@22 -- # version=25.1 00:10:22.974 10:15:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:22.974 10:15:29 version -- app/version.sh@28 -- # version=25.1rc0 00:10:22.974 10:15:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:22.974 10:15:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:22.974 10:15:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:22.974 10:15:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:22.974 ************************************ 00:10:22.974 END TEST version 00:10:22.974 ************************************ 00:10:22.974 00:10:22.974 real 0m0.319s 00:10:22.974 user 0m0.208s 00:10:22.974 sys 0m0.166s 00:10:22.974 10:15:30 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.974 10:15:30 version -- common/autotest_common.sh@10 -- # set +x 00:10:22.974 10:15:30 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:22.974 10:15:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:22.974 10:15:30 -- spdk/autotest.sh@194 -- # uname -s 00:10:22.974 10:15:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:22.974 10:15:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:22.974 10:15:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:22.974 10:15:30 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:22.974 10:15:30 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:22.974 10:15:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.974 10:15:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.974 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:10:23.233 ************************************ 00:10:23.233 START TEST blockdev_nvme 00:10:23.233 ************************************ 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:23.233 * Looking for test storage... 00:10:23.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.233 10:15:30 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.233 --rc genhtml_branch_coverage=1 00:10:23.233 --rc genhtml_function_coverage=1 00:10:23.233 --rc genhtml_legend=1 00:10:23.233 --rc geninfo_all_blocks=1 00:10:23.233 --rc geninfo_unexecuted_blocks=1 00:10:23.233 00:10:23.233 ' 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.233 --rc genhtml_branch_coverage=1 00:10:23.233 --rc genhtml_function_coverage=1 00:10:23.233 --rc genhtml_legend=1 00:10:23.233 --rc geninfo_all_blocks=1 00:10:23.233 --rc geninfo_unexecuted_blocks=1 00:10:23.233 00:10:23.233 ' 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.233 --rc genhtml_branch_coverage=1 00:10:23.233 --rc genhtml_function_coverage=1 00:10:23.233 --rc genhtml_legend=1 00:10:23.233 --rc geninfo_all_blocks=1 00:10:23.233 --rc geninfo_unexecuted_blocks=1 00:10:23.233 00:10:23.233 ' 00:10:23.233 10:15:30 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.233 --rc genhtml_branch_coverage=1 00:10:23.233 --rc genhtml_function_coverage=1 00:10:23.233 --rc genhtml_legend=1 00:10:23.233 --rc geninfo_all_blocks=1 00:10:23.233 --rc geninfo_unexecuted_blocks=1 00:10:23.233 00:10:23.233 ' 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:23.233 10:15:30 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:23.233 10:15:30 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60899 00:10:23.234 10:15:30 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:23.234 10:15:30 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:23.234 10:15:30 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60899 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60899 ']' 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.492 10:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.492 [2024-11-25 10:15:30.450196] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
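Annotation: blockdev_nvme builds its bdev layer from scripts/gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per PCIe controller; the generated payload is loaded below through load_subsystem_config. The same four attachments issued one at a time, with addresses taken from that payload (the -b/-t/-a flag spellings are assumed from standard rpc.py usage, not shown in this trace):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0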
00:10:23.492 [2024-11-25 10:15:30.450331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:10:23.750 [2024-11-25 10:15:30.634612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.750 [2024-11-25 10:15:30.764217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.685 10:15:31 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.685 10:15:31 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:24.685 10:15:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:24.685 10:15:31 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.685 10:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.943 10:15:32 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.943 10:15:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:10:24.943 10:15:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.943 10:15:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.202 10:15:32 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.202 10:15:32 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:25.202 10:15:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:25.203 10:15:32 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "89facf81-4e25-4f6b-bad0-8cea56eb8077"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "89facf81-4e25-4f6b-bad0-8cea56eb8077",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e87e696a-3739-4111-a5c4-bbc4946f572e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e87e696a-3739-4111-a5c4-bbc4946f572e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "842452e9-19ac-43e8-8021-89487767e399"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "842452e9-19ac-43e8-8021-89487767e399",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cc715017-d59c-4b81-93a2-bd6b41045615"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc715017-d59c-4b81-93a2-bd6b41045615",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4d190c21-a54a-4962-9afa-9f219e1d09d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "4d190c21-a54a-4962-9afa-9f219e1d09d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dbdc71bf-f29d-4d7d-895f-a9a7c0804235"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dbdc71bf-f29d-4d7d-895f-a9a7c0804235",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:25.203 10:15:32 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:25.203 10:15:32 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:25.203 10:15:32 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:25.203 10:15:32 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60899 00:10:25.203 10:15:32 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60899 ']' 00:10:25.203 10:15:32 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60899 00:10:25.203 10:15:32 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:25.203 10:15:32 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.203 10:15:32 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60899 00:10:25.463 10:15:32 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.463 10:15:32 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.463 10:15:32 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60899' 00:10:25.463 killing process with pid 60899 00:10:25.463 10:15:32 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60899 00:10:25.463 10:15:32 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60899 00:10:27.998 10:15:34 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:27.998 10:15:34 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:27.998 10:15:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:27.998 10:15:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.998 10:15:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:27.998 ************************************ 00:10:27.998 START TEST bdev_hello_world 00:10:27.998 ************************************ 00:10:27.998 10:15:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:27.998 [2024-11-25 10:15:34.800349] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:27.998 [2024-11-25 10:15:34.800477] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60999 ] 00:10:27.998 [2024-11-25 10:15:34.982308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.998 [2024-11-25 10:15:35.100452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.933 [2024-11-25 10:15:35.765274] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:28.933 [2024-11-25 10:15:35.765328] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:28.933 [2024-11-25 10:15:35.765355] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:28.934 [2024-11-25 10:15:35.768342] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:28.934 [2024-11-25 10:15:35.769046] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:28.934 [2024-11-25 10:15:35.769084] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:28.934 [2024-11-25 10:15:35.769317] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
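
The --json file handed to hello_bdev carries the same bdev_nvme_attach_controller calls that setup_nvme_conf loaded into the target above, wrapped in the standard subsystems layout. A single-controller sketch, assuming the same QEMU PCIe address as this run (paths relative to the SPDK repo):

# Describe one NVMe controller in a bdev config file...
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# ...then hello_bdev opens the named bdev, writes a buffer, reads it back, and prints it.
build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1
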
00:10:28.934 00:10:28.934 [2024-11-25 10:15:35.769340] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:29.869 ************************************ 00:10:29.869 END TEST bdev_hello_world 00:10:29.869 ************************************ 00:10:29.869 00:10:29.869 real 0m2.162s 00:10:29.869 user 0m1.816s 00:10:29.869 sys 0m0.237s 00:10:29.869 10:15:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.869 10:15:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:29.869 10:15:36 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:29.869 10:15:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.869 10:15:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.869 10:15:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.869 ************************************ 00:10:29.869 START TEST bdev_bounds 00:10:29.869 ************************************ 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61041 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61041' 00:10:29.869 Process bdevio pid: 61041 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61041 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61041 ']' 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.869 10:15:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:30.126 [2024-11-25 10:15:37.047278] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
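
bdev_bounds, which starts here, is a thin wrapper around the bdevio CUnit binary: start bdevio with -w so it waits for an RPC before running, point it at the same bdev config, then trigger every registered suite over RPC. Roughly, with paths as in this run and the socket polling sketched earlier standing in for waitforlisten:

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
# ...wait for /var/tmp/spdk.sock to answer, then run all suites:
test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"
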
00:10:30.126 [2024-11-25 10:15:37.047397] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61041 ] 00:10:30.126 [2024-11-25 10:15:37.228219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.384 [2024-11-25 10:15:37.340446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.384 [2024-11-25 10:15:37.340620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.384 [2024-11-25 10:15:37.340648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.949 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.949 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:30.949 10:15:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:31.209 I/O targets: 00:10:31.209 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:31.209 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:31.209 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.209 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.209 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.209 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:31.209 00:10:31.209 00:10:31.209 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.209 http://cunit.sourceforge.net/ 00:10:31.209 00:10:31.209 00:10:31.209 Suite: bdevio tests on: Nvme3n1 00:10:31.209 Test: blockdev write read block ...passed 00:10:31.209 Test: blockdev write zeroes read block ...passed 00:10:31.209 Test: blockdev write zeroes read no split ...passed 00:10:31.209 Test: blockdev write zeroes read split ...passed 00:10:31.209 Test: blockdev write zeroes read split partial ...passed 00:10:31.209 Test: blockdev reset ...[2024-11-25 10:15:38.194159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:31.209 [2024-11-25 10:15:38.198204] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:10:31.209 passed 00:10:31.209 Test: blockdev write read 8 blocks ...
00:10:31.209 passed 00:10:31.209 Test: blockdev write read size > 128k ...passed 00:10:31.209 Test: blockdev write read invalid size ...passed 00:10:31.209 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.209 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.209 Test: blockdev write read max offset ...passed 00:10:31.209 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.209 Test: blockdev writev readv 8 blocks ...passed 00:10:31.209 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.209 Test: blockdev writev readv block ...passed 00:10:31.209 Test: blockdev writev readv size > 128k ...passed 00:10:31.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.209 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.207408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8e0a000 len:0x1000 00:10:31.209 [2024-11-25 10:15:38.207460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.209 passed 00:10:31.209 Test: blockdev nvme passthru rw ...passed 00:10:31.209 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.209 Test: blockdev nvme admin passthru ...[2024-11-25 10:15:38.208390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.209 [2024-11-25 10:15:38.208435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.209 passed 00:10:31.209 Test: blockdev copy ...passed 00:10:31.209 Suite: bdevio tests on: Nvme2n3 00:10:31.209 Test: blockdev write read block ...passed 00:10:31.209 Test: blockdev write zeroes read block ...passed 00:10:31.209 Test: blockdev write zeroes read no split ...passed 00:10:31.209 Test: blockdev write zeroes read split ...passed 00:10:31.209 Test: blockdev write zeroes read split partial ...passed 00:10:31.209 Test: blockdev reset ...[2024-11-25 10:15:38.286333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.209 [2024-11-25 10:15:38.290661] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:31.209 passed 00:10:31.209 Test: blockdev write read 8 blocks ...passed 00:10:31.209 Test: blockdev write read size > 128k ...
00:10:31.209 passed 00:10:31.209 Test: blockdev write read invalid size ...passed 00:10:31.209 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.209 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.209 Test: blockdev write read max offset ...passed 00:10:31.209 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.209 Test: blockdev writev readv 8 blocks ...passed 00:10:31.209 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.209 Test: blockdev writev readv block ...passed 00:10:31.209 Test: blockdev writev readv size > 128k ...passed 00:10:31.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.209 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.299665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29c006000 len:0x1000 00:10:31.209 [2024-11-25 10:15:38.299713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.209 passed 00:10:31.209 Test: blockdev nvme passthru rw ...passed 00:10:31.209 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.209 Test: blockdev nvme admin passthru ...[2024-11-25 10:15:38.300644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.209 [2024-11-25 10:15:38.300682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.209 passed 00:10:31.209 Test: blockdev copy ...passed 00:10:31.209 Suite: bdevio tests on: Nvme2n2 00:10:31.209 Test: blockdev write read block ...passed 00:10:31.209 Test: blockdev write zeroes read block ...passed 00:10:31.468 Test: blockdev write zeroes read no split ...passed 00:10:31.468 Test: blockdev write zeroes read split ...passed 00:10:31.468 Test: blockdev write zeroes read split partial ...passed 00:10:31.468 Test: blockdev reset ...[2024-11-25 10:15:38.378758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.468 [2024-11-25 10:15:38.382902] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:31.468 passed 00:10:31.468 Test: blockdev write read 8 blocks ...
00:10:31.468 passed 00:10:31.468 Test: blockdev write read size > 128k ...passed 00:10:31.468 Test: blockdev write read invalid size ...passed 00:10:31.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.468 Test: blockdev write read max offset ...passed 00:10:31.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.468 Test: blockdev writev readv 8 blocks ...passed 00:10:31.468 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.468 Test: blockdev writev readv block ...passed 00:10:31.468 Test: blockdev writev readv size > 128k ...passed 00:10:31.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.468 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.393049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d463c000 len:0x1000 00:10:31.468 [2024-11-25 10:15:38.393231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.468 passed 00:10:31.468 Test: blockdev nvme passthru rw ...passed 00:10:31.468 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:15:38.394391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.468 [2024-11-25 10:15:38.394485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.468 passed 00:10:31.468 Test: blockdev nvme admin passthru ...passed 00:10:31.468 Test: blockdev copy ...passed 00:10:31.468 Suite: bdevio tests on: Nvme2n1 00:10:31.468 Test: blockdev write read block ...passed 00:10:31.468 Test: blockdev write zeroes read block ...passed 00:10:31.468 Test: blockdev write zeroes read no split ...passed 00:10:31.468 Test: blockdev write zeroes read split ...passed 00:10:31.468 Test: blockdev write zeroes read split partial ...passed 00:10:31.468 Test: blockdev reset ...[2024-11-25 10:15:38.472753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.468 [2024-11-25 10:15:38.476783] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:31.468 passed 00:10:31.468 Test: blockdev write read 8 blocks ...
00:10:31.468 passed 00:10:31.468 Test: blockdev write read size > 128k ...passed 00:10:31.468 Test: blockdev write read invalid size ...passed 00:10:31.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.468 Test: blockdev write read max offset ...passed 00:10:31.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.468 Test: blockdev writev readv 8 blocks ...passed 00:10:31.468 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.468 Test: blockdev writev readv block ...passed 00:10:31.468 Test: blockdev writev readv size > 128k ...passed 00:10:31.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.469 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.486681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4638000 len:0x1000 00:10:31.469 [2024-11-25 10:15:38.486854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.469 passed 00:10:31.469 Test: blockdev nvme passthru rw ...passed 00:10:31.469 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:15:38.488047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.469 [2024-11-25 10:15:38.488143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.469 passed 00:10:31.469 Test: blockdev nvme admin passthru ...passed 00:10:31.469 Test: blockdev copy ...passed 00:10:31.469 Suite: bdevio tests on: Nvme1n1 00:10:31.469 Test: blockdev write read block ...passed 00:10:31.469 Test: blockdev write zeroes read block ...passed 00:10:31.469 Test: blockdev write zeroes read no split ...passed 00:10:31.469 Test: blockdev write zeroes read split ...passed 00:10:31.469 Test: blockdev write zeroes read split partial ...passed 00:10:31.469 Test: blockdev reset ...[2024-11-25 10:15:38.569776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:31.469 [2024-11-25 10:15:38.573620] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:10:31.469 passed 00:10:31.469 Test: blockdev write read 8 blocks ...
00:10:31.469 passed 00:10:31.469 Test: blockdev write read size > 128k ...passed 00:10:31.469 Test: blockdev write read invalid size ...passed 00:10:31.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.469 Test: blockdev write read max offset ...passed 00:10:31.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.469 Test: blockdev writev readv 8 blocks ...passed 00:10:31.727 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.727 Test: blockdev writev readv block ...passed 00:10:31.727 Test: blockdev writev readv size > 128k ...passed 00:10:31.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.727 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.582474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4634000 len:0x1000 00:10:31.727 [2024-11-25 10:15:38.582533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.727 passed 00:10:31.727 Test: blockdev nvme passthru rw ...passed 00:10:31.728 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.728 Test: blockdev nvme admin passthru ...[2024-11-25 10:15:38.583447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.728 [2024-11-25 10:15:38.583488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.728 passed 00:10:31.728 Test: blockdev copy ...passed 00:10:31.728 Suite: bdevio tests on: Nvme0n1 00:10:31.728 Test: blockdev write read block ...passed 00:10:31.728 Test: blockdev write zeroes read block ...passed 00:10:31.728 Test: blockdev write zeroes read no split ...passed 00:10:31.728 Test: blockdev write zeroes read split ...passed 00:10:31.728 Test: blockdev write zeroes read split partial ...passed 00:10:31.728 Test: blockdev reset ...[2024-11-25 10:15:38.662589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:31.728 [2024-11-25 10:15:38.666532] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:31.728 passed 00:10:31.728 Test: blockdev write read 8 blocks ...passed 00:10:31.728 Test: blockdev write read size > 128k ...passed 00:10:31.728 Test: blockdev write read invalid size ...passed 00:10:31.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.728 Test: blockdev write read max offset ...passed 00:10:31.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.728 Test: blockdev writev readv 8 blocks ...passed 00:10:31.728 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.728 Test: blockdev writev readv block ...passed 00:10:31.728 Test: blockdev writev readv size > 128k ...passed 00:10:31.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.728 Test: blockdev comparev and writev ...[2024-11-25 10:15:38.675998] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:31.728 separate metadata which is not supported yet.
00:10:31.728 passed 00:10:31.728 Test: blockdev nvme passthru rw ...passed 00:10:31.728 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:15:38.676784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:31.728 [2024-11-25 10:15:38.676945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:31.728 passed 00:10:31.728 Test: blockdev nvme admin passthru ...passed 00:10:31.728 Test: blockdev copy ...passed 00:10:31.728 00:10:31.728 Run Summary: Type Total Ran Passed Failed Inactive 00:10:31.728 suites 6 6 n/a 0 0 00:10:31.728 tests 138 138 138 0 0 00:10:31.728 asserts 893 893 893 0 n/a 00:10:31.728 00:10:31.728 Elapsed time = 1.505 seconds 00:10:31.728 0 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61041 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61041 ']' 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61041 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61041 00:10:31.728 killing process with pid 61041 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61041' 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61041 00:10:31.728 10:15:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61041 00:10:32.711 ************************************ 00:10:32.711 END TEST bdev_bounds 00:10:32.711 ************************************ 00:10:32.711 10:15:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:32.711 00:10:32.711 real 0m2.854s 00:10:32.711 user 0m7.253s 00:10:32.711 sys 0m0.409s 00:10:32.711 10:15:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.711 10:15:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:32.970 10:15:39 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:32.970 10:15:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.970 10:15:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.970 10:15:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.970 ************************************ 00:10:32.970 START TEST bdev_nbd 00:10:32.970 ************************************ 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:32.970 10:15:39 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61101 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61101 /var/tmp/spdk-nbd.sock 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61101 ']' 00:10:32.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.970 10:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:32.970 [2024-11-25 10:15:39.998942] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
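
The nbd test that follows reduces to three RPCs against the dedicated /var/tmp/spdk-nbd.sock socket: export a bdev as a kernel block device, sanity-check it with a direct read, and tear it down. A condensed sketch; the dd mirrors the waitfornbd check used below, and the choice of /dev/nbd0 is illustrative:

rpc() { scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
rpc nbd_start_disk Nvme0n1 /dev/nbd0          # the bdev now appears as /dev/nbd0
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # direct read proves it is live
rpc nbd_get_disks                             # JSON list of nbd_device/bdev_name pairs
rpc nbd_stop_disk /dev/nbd0
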
00:10:32.970 [2024-11-25 10:15:39.999096] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.228 [2024-11-25 10:15:40.191016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.228 [2024-11-25 10:15:40.307835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.163 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.163 1+0 records in 
00:10:34.163 1+0 records out 00:10:34.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598609 s, 6.8 MB/s 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.421 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.680 1+0 records in 00:10:34.680 1+0 records out 00:10:34.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492186 s, 8.3 MB/s 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.680 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.939 1+0 records in 00:10:34.939 1+0 records out 00:10:34.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727012 s, 5.6 MB/s 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:34.939 10:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.197 1+0 records in 00:10:35.197 1+0 records out 00:10:35.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800349 s, 5.1 MB/s 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.197 10:15:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.197 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.456 1+0 records in 00:10:35.456 1+0 records out 00:10:35.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778464 s, 5.3 MB/s 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.456 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.714 1+0 records in 00:10:35.714 1+0 records out 00:10:35.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813722 s, 5.0 MB/s 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:35.714 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.972 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd0", 00:10:35.973 "bdev_name": "Nvme0n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd1", 00:10:35.973 "bdev_name": "Nvme1n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd2", 00:10:35.973 "bdev_name": "Nvme2n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd3", 00:10:35.973 "bdev_name": "Nvme2n2" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd4", 00:10:35.973 "bdev_name": "Nvme2n3" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd5", 00:10:35.973 "bdev_name": "Nvme3n1" 00:10:35.973 } 00:10:35.973 ]' 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd0", 00:10:35.973 "bdev_name": "Nvme0n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd1", 00:10:35.973 "bdev_name": "Nvme1n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd2", 00:10:35.973 "bdev_name": "Nvme2n1" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd3", 00:10:35.973 "bdev_name": "Nvme2n2" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd4", 00:10:35.973 "bdev_name": "Nvme2n3" 00:10:35.973 }, 00:10:35.973 { 00:10:35.973 "nbd_device": "/dev/nbd5", 00:10:35.973 "bdev_name": "Nvme3n1" 00:10:35.973 } 00:10:35.973 ]' 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.973 10:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.231 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.489 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.748 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:37.006 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.007 10:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.265 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.523 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:37.782 10:15:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.782 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:37.783 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:38.041 /dev/nbd0 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.041 
10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.041 1+0 records in 00:10:38.041 1+0 records out 00:10:38.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000947869 s, 4.3 MB/s 00:10:38.041 10:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.041 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:38.300 /dev/nbd1 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:38.300 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.301 1+0 records in 00:10:38.301 1+0 records out 00:10:38.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647333 s, 6.3 MB/s 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.301 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:38.560 /dev/nbd10 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.560 1+0 records in 00:10:38.560 1+0 records out 00:10:38.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489772 s, 8.4 MB/s 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.560 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:38.819 /dev/nbd11 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.819 10:15:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.819 1+0 records in 00:10:38.819 1+0 records out 00:10:38.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630265 s, 6.5 MB/s 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:38.819 10:15:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:39.077 /dev/nbd12 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.077 1+0 records in 00:10:39.077 1+0 records out 00:10:39.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493957 s, 8.3 MB/s 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:39.077 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:39.336 /dev/nbd13 
00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:39.336 1+0 records in 00:10:39.336 1+0 records out 00:10:39.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010245 s, 4.0 MB/s 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.336 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:39.595 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd0", 00:10:39.595 "bdev_name": "Nvme0n1" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd1", 00:10:39.595 "bdev_name": "Nvme1n1" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd10", 00:10:39.595 "bdev_name": "Nvme2n1" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd11", 00:10:39.595 "bdev_name": "Nvme2n2" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd12", 00:10:39.595 "bdev_name": "Nvme2n3" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd13", 00:10:39.595 "bdev_name": "Nvme3n1" 00:10:39.595 } 00:10:39.595 ]' 00:10:39.595 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:39.595 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd0", 00:10:39.595 "bdev_name": "Nvme0n1" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd1", 00:10:39.595 "bdev_name": "Nvme1n1" 00:10:39.595 }, 
00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd10", 00:10:39.595 "bdev_name": "Nvme2n1" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd11", 00:10:39.595 "bdev_name": "Nvme2n2" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd12", 00:10:39.595 "bdev_name": "Nvme2n3" 00:10:39.595 }, 00:10:39.595 { 00:10:39.595 "nbd_device": "/dev/nbd13", 00:10:39.595 "bdev_name": "Nvme3n1" 00:10:39.595 } 00:10:39.595 ]' 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:39.853 /dev/nbd1 00:10:39.853 /dev/nbd10 00:10:39.853 /dev/nbd11 00:10:39.853 /dev/nbd12 00:10:39.853 /dev/nbd13' 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:39.853 /dev/nbd1 00:10:39.853 /dev/nbd10 00:10:39.853 /dev/nbd11 00:10:39.853 /dev/nbd12 00:10:39.853 /dev/nbd13' 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:39.853 256+0 records in 00:10:39.853 256+0 records out 00:10:39.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571483 s, 183 MB/s 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:39.853 256+0 records in 00:10:39.853 256+0 records out 00:10:39.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112879 s, 9.3 MB/s 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.853 10:15:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:40.111 256+0 records in 00:10:40.111 256+0 records out 00:10:40.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126313 s, 8.3 MB/s 00:10:40.111 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.111 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:40.111 256+0 records in 00:10:40.111 256+0 records out 00:10:40.111 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.142294 s, 7.4 MB/s 00:10:40.111 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.111 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:40.369 256+0 records in 00:10:40.369 256+0 records out 00:10:40.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127771 s, 8.2 MB/s 00:10:40.369 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.369 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:40.369 256+0 records in 00:10:40.369 256+0 records out 00:10:40.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121303 s, 8.6 MB/s 00:10:40.369 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.369 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:40.628 256+0 records in 00:10:40.628 256+0 records out 00:10:40.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124199 s, 8.4 MB/s 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.628 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.885 10:15:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.143 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.401 10:15:48 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.401 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.659 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.917 10:15:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.176 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:42.435 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:42.692 malloc_lvol_verify 00:10:42.692 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:42.949 77a2f83d-440d-419d-bbb3-3628603cdf4e 00:10:42.949 10:15:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:43.207 eb71807c-4ef3-4e9e-bc53-25b505871f7b 00:10:43.207 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:43.464 /dev/nbd0 00:10:43.464 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:43.464 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:43.464 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:43.464 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:43.464 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:43.464 mke2fs 1.47.0 (5-Feb-2023) 00:10:43.464 Discarding device blocks: 0/4096 done 00:10:43.464 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:43.464 00:10:43.464 Allocating group tables: 0/1 done 00:10:43.464 Writing inode tables: 0/1 done 00:10:43.464 Creating journal (1024 blocks): done 00:10:43.464 Writing superblocks and filesystem accounting information: 0/1 done 00:10:43.464 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:43.465 10:15:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.465 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61101 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61101 ']' 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61101 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61101 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61101' 00:10:43.722 killing process with pid 61101 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61101 00:10:43.722 10:15:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61101 00:10:45.094 10:15:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:45.094 00:10:45.094 real 0m11.974s 00:10:45.094 user 0m15.826s 00:10:45.094 sys 0m4.845s 00:10:45.094 10:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.094 ************************************ 00:10:45.094 END TEST bdev_nbd 00:10:45.094 10:15:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 ************************************ 00:10:45.094 10:15:51 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:45.094 10:15:51 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:10:45.094 skipping fio tests on NVMe due to multi-ns failures. 00:10:45.094 10:15:51 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
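Every nbd_start_disk call traced above is followed by the same readiness probe: loop up to 20 times checking /proc/partitions for the new nbdX entry, then read a single 4 KiB block with O_DIRECT into a scratch file and confirm a non-zero byte count arrived. A condensed standalone sketch of that probe (not the SPDK helper itself; the scratch-file handling and the retry delay are assumptions here, since every traced run succeeds on its first probe):

waitfornbd_sketch() {
    local nbd_name=$1 i tmp size
    tmp=$(mktemp)
    # Poll until the kernel lists the device; the traced helper allows 20 tries.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # retry delay is an assumption; not visible in the trace
    done
    # Prove the device actually serves I/O: read one 4 KiB block with O_DIRECT.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || { rm -f "$tmp"; return 1; }
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]   # mirrors the traced '[' 4096 '!=' 0 ']' check
}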
00:10:45.094 10:15:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:45.094 10:15:51 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:45.094 10:15:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:45.094 10:15:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.094 10:15:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.094 ************************************ 00:10:45.094 START TEST bdev_verify 00:10:45.094 ************************************ 00:10:45.094 10:15:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:45.094 [2024-11-25 10:15:52.023840] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:10:45.094 [2024-11-25 10:15:52.023990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61501 ] 00:10:45.353 [2024-11-25 10:15:52.206809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.353 [2024-11-25 10:15:52.325566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.353 [2024-11-25 10:15:52.325606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.285 Running I/O for 5 seconds... 00:10:48.166 17664.00 IOPS, 69.00 MiB/s [2024-11-25T10:15:56.652Z] 19552.00 IOPS, 76.38 MiB/s [2024-11-25T10:15:57.588Z] 19733.33 IOPS, 77.08 MiB/s [2024-11-25T10:15:58.524Z] 20192.00 IOPS, 78.88 MiB/s [2024-11-25T10:15:58.524Z] 20454.40 IOPS, 79.90 MiB/s 00:10:51.412 Latency(us) 00:10:51.412 [2024-11-25T10:15:58.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.412 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0xbd0bd 00:10:51.412 Nvme0n1 : 5.09 1686.34 6.59 0.00 0.00 75743.76 13580.95 81275.17 00:10:51.412 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:51.412 Nvme0n1 : 5.05 1674.21 6.54 0.00 0.00 76189.34 16107.64 88013.01 00:10:51.412 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0xa0000 00:10:51.412 Nvme1n1 : 5.09 1685.89 6.59 0.00 0.00 75633.07 11054.27 74958.44 00:10:51.412 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0xa0000 length 0xa0000 00:10:51.412 Nvme1n1 : 5.05 1673.78 6.54 0.00 0.00 76070.38 17792.10 80011.82 00:10:51.412 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0x80000 00:10:51.412 Nvme2n1 : 5.09 1684.68 6.58 0.00 0.00 75435.28 13423.04 76221.79 00:10:51.412 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x80000 length 0x80000 00:10:51.412 Nvme2n1 : 5.07 1678.41 6.56 0.00 0.00 75596.17 7264.23 78327.36 00:10:51.412 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0x80000 00:10:51.412 Nvme2n2 : 5.09 1684.32 6.58 0.00 0.00 75329.38 12738.72 76642.90 00:10:51.412 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x80000 length 0x80000 00:10:51.412 Nvme2n2 : 5.09 1685.95 6.59 0.00 0.00 75253.83 12475.53 77064.02 00:10:51.412 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0x80000 00:10:51.412 Nvme2n3 : 5.09 1683.90 6.58 0.00 0.00 75209.31 12475.53 76642.90 00:10:51.412 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x80000 length 0x80000 00:10:51.412 Nvme2n3 : 5.09 1685.26 6.58 0.00 0.00 75132.80 13107.20 76221.79 00:10:51.412 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x0 length 0x20000 00:10:51.412 Nvme3n1 : 5.09 1683.46 6.58 0.00 0.00 75093.85 12686.09 77064.02 00:10:51.412 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:51.412 Verification LBA range: start 0x20000 length 0x20000 00:10:51.412 Nvme3n1 : 5.09 1684.90 6.58 0.00 0.00 75010.52 12001.77 77485.13 00:10:51.412 [2024-11-25T10:15:58.524Z] =================================================================================================================== 00:10:51.412 [2024-11-25T10:15:58.524Z] Total : 20191.10 78.87 0.00 0.00 75473.10 7264.23 88013.01 00:10:52.789 00:10:52.789 real 0m7.702s 00:10:52.789 user 0m14.224s 00:10:52.789 sys 0m0.316s 00:10:52.789 10:15:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.789 10:15:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:52.789 ************************************ 00:10:52.789 END TEST bdev_verify 00:10:52.789 ************************************ 00:10:52.789 10:15:59 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.789 10:15:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:52.789 10:15:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.789 10:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.789 ************************************ 00:10:52.789 START TEST bdev_verify_big_io 00:10:52.789 ************************************ 00:10:52.789 10:15:59 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.789 [2024-11-25 10:15:59.796619] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
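For reference, the verify stage that just completed and the big-I/O stage now starting drive the same bdevperf binary and differ only in I/O size (-o 4096 vs. -o 65536). The invocation, reproduced from the trace with paths shortened relative to the repo root (the trailing '' is the test harness's empty extra-arguments slot):

# -q 128: queue depth; -w verify: workload; -t 5: run time in seconds;
# -m 0x3: run reactors on cores 0 and 1; -C is passed through by the test script.
./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''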
00:10:52.789 [2024-11-25 10:15:59.796771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:10:53.081 [2024-11-25 10:15:59.985679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:53.081 [2024-11-25 10:16:00.108017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.081 [2024-11-25 10:16:00.108050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.030 Running I/O for 5 seconds... 00:10:58.459 1746.00 IOPS, 109.12 MiB/s [2024-11-25T10:16:06.947Z] 2997.00 IOPS, 187.31 MiB/s [2024-11-25T10:16:06.947Z] 3746.67 IOPS, 234.17 MiB/s 00:10:59.835 Latency(us) 00:10:59.835 [2024-11-25T10:16:06.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.835 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0x0 length 0xbd0b 00:10:59.835 Nvme0n1 : 5.52 159.48 9.97 0.00 0.00 779645.23 27583.02 852336.48 00:10:59.835 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:59.835 Nvme0n1 : 5.42 165.29 10.33 0.00 0.00 752090.99 19792.40 761375.67 00:10:59.835 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0x0 length 0xa000 00:10:59.835 Nvme1n1 : 5.52 152.72 9.55 0.00 0.00 795088.33 60219.42 1280189.17 00:10:59.835 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0xa000 length 0xa000 00:10:59.835 Nvme1n1 : 5.53 165.93 10.37 0.00 0.00 726571.07 66957.26 697366.21 00:10:59.835 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0x0 length 0x8000 00:10:59.835 Nvme2n1 : 5.59 157.33 9.83 0.00 0.00 748707.29 70747.30 1098267.55 00:10:59.835 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0x8000 length 0x8000 00:10:59.835 Nvme2n1 : 5.58 172.08 10.76 0.00 0.00 693223.93 43374.83 707472.96 00:10:59.835 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.835 Verification LBA range: start 0x0 length 0x8000 00:10:59.835 Nvme2n2 : 5.70 172.76 10.80 0.00 0.00 666366.76 45690.96 751268.91 00:10:59.835 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.836 Verification LBA range: start 0x8000 length 0x8000 00:10:59.836 Nvme2n2 : 5.65 177.83 11.11 0.00 0.00 656902.48 19371.28 714210.80 00:10:59.836 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.836 Verification LBA range: start 0x0 length 0x8000 00:10:59.836 Nvme2n3 : 5.71 169.40 10.59 0.00 0.00 663434.32 10422.59 1374518.90 00:10:59.836 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.836 Verification LBA range: start 0x8000 length 0x8000 00:10:59.836 Nvme2n3 : 5.65 181.30 11.33 0.00 0.00 630961.56 43374.83 727686.48 00:10:59.836 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.836 Verification LBA range: start 0x0 length 0x2000 00:10:59.836 Nvme3n1 : 5.76 205.68 12.85 0.00 0.00 537320.23 1394.94 1010675.66 00:10:59.836 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:10:59.836 Verification LBA range: start 0x2000 length 0x2000 00:10:59.836 Nvme3n1 : 5.70 198.36 12.40 0.00 0.00 565275.85 1039.63 724317.56 00:10:59.836 [2024-11-25T10:16:06.948Z] =================================================================================================================== 00:10:59.836 [2024-11-25T10:16:06.948Z] Total : 2078.16 129.89 0.00 0.00 676888.45 1039.63 1374518.90 00:11:02.421 00:11:02.421 real 0m9.569s 00:11:02.421 user 0m17.908s 00:11:02.421 sys 0m0.362s 00:11:02.421 10:16:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.421 10:16:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.421 ************************************ 00:11:02.421 END TEST bdev_verify_big_io 00:11:02.421 ************************************ 00:11:02.421 10:16:09 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.421 10:16:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:02.421 10:16:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.421 10:16:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:02.421 ************************************ 00:11:02.421 START TEST bdev_write_zeroes 00:11:02.421 ************************************ 00:11:02.421 10:16:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:02.421 [2024-11-25 10:16:09.428994] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:02.421 [2024-11-25 10:16:09.429128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61714 ] 00:11:02.679 [2024-11-25 10:16:09.597810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.679 [2024-11-25 10:16:09.714461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.612 Running I/O for 1 seconds... 
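The one-second write_zeroes pass now running (its per-bdev throughput table follows) uses the same harness with the workload swapped and a single core; command reproduced from the trace, paths again shortened relative to the repo root:

./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''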
00:11:04.549 73728.00 IOPS, 288.00 MiB/s 00:11:04.549 Latency(us) 00:11:04.549 [2024-11-25T10:16:11.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.549 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme0n1 : 1.02 12244.07 47.83 0.00 0.00 10434.05 8580.22 26530.24 00:11:04.549 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme1n1 : 1.02 12232.73 47.78 0.00 0.00 10432.88 8738.13 27372.47 00:11:04.549 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme2n1 : 1.02 12221.66 47.74 0.00 0.00 10421.86 8527.58 26530.24 00:11:04.549 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme2n2 : 1.02 12210.41 47.70 0.00 0.00 10415.38 8474.94 26214.40 00:11:04.549 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme2n3 : 1.02 12198.78 47.65 0.00 0.00 10374.21 8422.30 22950.76 00:11:04.549 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:04.549 Nvme3n1 : 1.02 12187.76 47.61 0.00 0.00 10333.36 7106.31 19687.12 00:11:04.549 [2024-11-25T10:16:11.661Z] =================================================================================================================== 00:11:04.549 [2024-11-25T10:16:11.661Z] Total : 73295.41 286.31 0.00 0.00 10401.96 7106.31 27372.47 00:11:05.491 00:11:05.491 real 0m3.211s 00:11:05.491 user 0m2.842s 00:11:05.491 sys 0m0.252s 00:11:05.491 10:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.491 10:16:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:05.491 ************************************ 00:11:05.491 END TEST bdev_write_zeroes 00:11:05.491 ************************************ 00:11:05.491 10:16:12 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.491 10:16:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:05.766 10:16:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.766 10:16:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:05.766 ************************************ 00:11:05.766 START TEST bdev_json_nonenclosed 00:11:05.766 ************************************ 00:11:05.766 10:16:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:05.766 [2024-11-25 10:16:12.710536] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
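The two JSON tests that close this suite are negative tests: bdevperf is pointed at a deliberately malformed --json config and must fail cleanly through spdk_app_stop rather than crash. The log does not show the fixture contents; the sketch below is a hypothetical reconstruction consistent with the two error messages that follow ("not enclosed in {}" and "'subsystems' should be an array"):

# Hypothetical fixtures (the actual nonenclosed.json / nonarray.json contents
# are not shown in this log):
printf '[]\n' > nonenclosed.json                  # valid JSON, but the top level is not a {} object
printf '{ "subsystems": {} }\n' > nonarray.json   # object at top level, but "subsystems" is not an array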
00:11:05.766 [2024-11-25 10:16:12.710669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:11:06.047 [2024-11-25 10:16:12.894721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.047 [2024-11-25 10:16:13.043008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.047 [2024-11-25 10:16:13.043116] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:06.047 [2024-11-25 10:16:13.043139] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:06.047 [2024-11-25 10:16:13.043151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.306 00:11:06.306 real 0m0.700s 00:11:06.306 user 0m0.444s 00:11:06.306 sys 0m0.150s 00:11:06.306 10:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.306 10:16:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:06.306 ************************************ 00:11:06.306 END TEST bdev_json_nonenclosed 00:11:06.306 ************************************ 00:11:06.306 10:16:13 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:06.306 10:16:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:06.306 10:16:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.306 10:16:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:06.306 ************************************ 00:11:06.306 START TEST bdev_json_nonarray 00:11:06.306 ************************************ 00:11:06.307 10:16:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:06.565 [2024-11-25 10:16:13.478667] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:06.565 [2024-11-25 10:16:13.478793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61793 ] 00:11:06.565 [2024-11-25 10:16:13.663463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.824 [2024-11-25 10:16:13.791112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.824 [2024-11-25 10:16:13.791238] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
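Both errors above are the expected outcomes: bdev_json_nonenclosed and bdev_json_nonarray feed deliberately malformed configs to bdevperf and require spdk_app_stop to fire with a non-zero code. The repo's nonenclosed.json and nonarray.json are not printed in this log, so the shapes below are illustrative assumptions that would trigger exactly these two messages:

    # Hypothetical config shapes matching the two errors logged above.
    printf '%s\n' '["subsystems"]'     > /tmp/nonenclosed.json  # valid JSON, but the top level is not enclosed in {}
    printf '%s\n' '{"subsystems": {}}' > /tmp/nonarray.json     # enclosed in {}, but "subsystems" is not an array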
00:11:06.824 [2024-11-25 10:16:13.791275] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:06.824 [2024-11-25 10:16:13.791295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:07.110 00:11:07.110 real 0m0.672s 00:11:07.110 user 0m0.416s 00:11:07.110 sys 0m0.150s 00:11:07.110 10:16:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.110 10:16:14 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 ************************************ 00:11:07.110 END TEST bdev_json_nonarray 00:11:07.110 ************************************ 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:07.110 10:16:14 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:07.110 00:11:07.110 real 0m44.040s 00:11:07.110 user 1m5.437s 00:11:07.110 sys 0m7.947s 00:11:07.110 10:16:14 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.110 10:16:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 ************************************ 00:11:07.110 END TEST blockdev_nvme 00:11:07.110 ************************************ 00:11:07.110 10:16:14 -- spdk/autotest.sh@209 -- # uname -s 00:11:07.110 10:16:14 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:11:07.110 10:16:14 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:07.110 10:16:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.110 10:16:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.110 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.110 ************************************ 00:11:07.110 START TEST blockdev_nvme_gpt 00:11:07.110 ************************************ 00:11:07.110 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:07.370 * Looking for test storage... 
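blockdev.sh dispatches on its single argument; with gpt it first runs setup_gpt_conf, which picks the first NVMe drive whose parted output reports an unrecognised disk label, creates two half-disk partitions, and stamps SPDK's GPT partition-type GUIDs onto them (the full trace follows further down). Condensed, that setup amounts to:

    # Condensed from the setup_gpt_conf trace below; the type GUIDs are
    # parsed out of module/bdev/gpt/gpt.h by get_spdk_gpt/get_spdk_gpt_old
    # rather than hard-coded in the test.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1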
00:11:07.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:07.370 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.370 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.370 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.370 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.370 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.371 10:16:14 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.371 --rc genhtml_branch_coverage=1 00:11:07.371 --rc genhtml_function_coverage=1 00:11:07.371 --rc genhtml_legend=1 00:11:07.371 --rc geninfo_all_blocks=1 00:11:07.371 --rc geninfo_unexecuted_blocks=1 00:11:07.371 00:11:07.371 ' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.371 --rc 
genhtml_branch_coverage=1 00:11:07.371 --rc genhtml_function_coverage=1 00:11:07.371 --rc genhtml_legend=1 00:11:07.371 --rc geninfo_all_blocks=1 00:11:07.371 --rc geninfo_unexecuted_blocks=1 00:11:07.371 00:11:07.371 ' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.371 --rc genhtml_branch_coverage=1 00:11:07.371 --rc genhtml_function_coverage=1 00:11:07.371 --rc genhtml_legend=1 00:11:07.371 --rc geninfo_all_blocks=1 00:11:07.371 --rc geninfo_unexecuted_blocks=1 00:11:07.371 00:11:07.371 ' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.371 --rc genhtml_branch_coverage=1 00:11:07.371 --rc genhtml_function_coverage=1 00:11:07.371 --rc genhtml_legend=1 00:11:07.371 --rc geninfo_all_blocks=1 00:11:07.371 --rc geninfo_unexecuted_blocks=1 00:11:07.371 00:11:07.371 ' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61877 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:11:07.371 10:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61877 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61877 ']' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.371 10:16:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:07.631 [2024-11-25 10:16:14.560209] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:07.631 [2024-11-25 10:16:14.560331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:11:07.889 [2024-11-25 10:16:14.742713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.889 [2024-11-25 10:16:14.858150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.827 10:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.827 10:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:11:08.827 10:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:08.827 10:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:11:08.827 10:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:09.396 Waiting for block devices as requested 00:11:09.656 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.656 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.656 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.915 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:15.188 10:16:21 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:15.188 10:16:21 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:15.188 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:15.188 BYT; 00:11:15.188 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:15.189 BYT; 00:11:15.189 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:15.189 10:16:22 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:15.189 10:16:22 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:16.149 The operation has completed successfully. 00:11:16.149 10:16:23 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:17.084 The operation has completed successfully. 00:11:17.084 10:16:24 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.614 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.614 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.614 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:18.873 10:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.873 10:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:18.873 [] 00:11:18.873 10:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.873 10:16:25 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:18.873 10:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.873 10:16:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.130 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.130 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:11:19.130 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:19.130 10:16:26 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.130 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.130 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.130 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.388 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.388 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:19.388 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:19.388 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:19.388 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.388 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.388 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.388 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:19.388 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:19.389 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "93dd0641-e92c-4f0f-965f-fdf42ad7eb7e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "93dd0641-e92c-4f0f-965f-fdf42ad7eb7e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "545a5d16-cb92-45e3-af10-bf3e18e9a4d5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "545a5d16-cb92-45e3-af10-bf3e18e9a4d5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "17f29a33-17a5-410e-9f1a-5c9a3fdd0294"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "17f29a33-17a5-410e-9f1a-5c9a3fdd0294",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "60ae8312-cce0-4843-98b3-6d996afce25a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "60ae8312-cce0-4843-98b3-6d996afce25a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "72d7793f-733b-4f45-9331-716ae7972552"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "72d7793f-733b-4f45-9331-716ae7972552",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:19.389 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:19.389 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:19.389 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:19.389 10:16:26 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61877 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61877 ']' 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61877 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61877 00:11:19.389 killing process with pid 61877 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61877' 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61877 00:11:19.389 10:16:26 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61877 00:11:21.917 10:16:28 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:21.917 10:16:28 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:21.917 10:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:21.917 10:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.917 10:16:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:21.917 ************************************ 00:11:21.917 START TEST bdev_hello_world 00:11:21.917 ************************************ 00:11:21.917 10:16:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:21.917 
[2024-11-25 10:16:28.953322] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:21.917 [2024-11-25 10:16:28.953445] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62526 ] 00:11:22.176 [2024-11-25 10:16:29.134418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.176 [2024-11-25 10:16:29.267012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.114 [2024-11-25 10:16:29.950173] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:23.114 [2024-11-25 10:16:29.950223] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:23.114 [2024-11-25 10:16:29.950248] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:23.114 [2024-11-25 10:16:29.953270] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:23.114 [2024-11-25 10:16:29.953970] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:23.114 [2024-11-25 10:16:29.954010] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:23.114 [2024-11-25 10:16:29.954202] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:23.114 00:11:23.114 [2024-11-25 10:16:29.954225] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:24.050 00:11:24.050 real 0m2.213s 00:11:24.050 user 0m1.839s 00:11:24.050 sys 0m0.260s 00:11:24.050 ************************************ 00:11:24.050 END TEST bdev_hello_world 00:11:24.050 ************************************ 00:11:24.050 10:16:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.050 10:16:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:24.050 10:16:31 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:24.050 10:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.050 10:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.050 10:16:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:24.050 ************************************ 00:11:24.050 START TEST bdev_bounds 00:11:24.050 ************************************ 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:24.051 Process bdevio pid: 62568 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62568 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62568' 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62568 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62568 ']' 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.051 10:16:31 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.051 10:16:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:24.310 [2024-11-25 10:16:31.238227] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:24.310 [2024-11-25 10:16:31.238553] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62568 ] 00:11:24.569 [2024-11-25 10:16:31.420823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.569 [2024-11-25 10:16:31.540131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.569 [2024-11-25 10:16:31.540180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.569 [2024-11-25 10:16:31.540216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.136 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.136 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:25.136 10:16:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:25.395 I/O targets: 00:11:25.395 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:25.395 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:25.395 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:25.395 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:25.395 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:25.395 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:25.395 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:25.395 00:11:25.395 00:11:25.395 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.395 http://cunit.sourceforge.net/ 00:11:25.395 00:11:25.395 00:11:25.395 Suite: bdevio tests on: Nvme3n1 00:11:25.395 Test: blockdev write read block ...passed 00:11:25.395 Test: blockdev write zeroes read block ...passed 00:11:25.395 Test: blockdev write zeroes read no split ...passed 00:11:25.395 Test: blockdev write zeroes read split ...passed 00:11:25.395 Test: blockdev write zeroes read split partial ...passed 00:11:25.395 Test: blockdev reset ...[2024-11-25 10:16:32.387822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:25.395 [2024-11-25 10:16:32.391717] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:11:25.395 passed 00:11:25.395 Test: blockdev write read 8 blocks ...
00:11:25.395 passed 00:11:25.395 Test: blockdev write read size > 128k ...passed 00:11:25.395 Test: blockdev write read invalid size ...passed 00:11:25.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.395 Test: blockdev write read max offset ...passed 00:11:25.395 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.395 Test: blockdev writev readv 8 blocks ...passed 00:11:25.395 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.395 Test: blockdev writev readv block ...passed 00:11:25.395 Test: blockdev writev readv size > 128k ...passed 00:11:25.395 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.395 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.400812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6e04000 len:0x1000 00:11:25.395 [2024-11-25 10:16:32.400862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.395 passed 00:11:25.395 Test: blockdev nvme passthru rw ...passed 00:11:25.395 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.395 Test: blockdev nvme admin passthru ...[2024-11-25 10:16:32.401737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:25.395 [2024-11-25 10:16:32.401781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:25.395 passed 00:11:25.395 Test: blockdev copy ...passed 00:11:25.395 Suite: bdevio tests on: Nvme2n3 00:11:25.395 Test: blockdev write read block ...passed 00:11:25.395 Test: blockdev write zeroes read block ...passed 00:11:25.395 Test: blockdev write zeroes read no split ...passed 00:11:25.395 Test: blockdev write zeroes read split ...passed 00:11:25.395 Test: blockdev write zeroes read split partial ...passed 00:11:25.395 Test: blockdev reset ...[2024-11-25 10:16:32.476968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:25.395 [2024-11-25 10:16:32.481252] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:25.395 passed 00:11:25.395 Test: blockdev write read 8 blocks ...passed 00:11:25.395 Test: blockdev write read size > 128k ...passed 00:11:25.395 Test: blockdev write read invalid size ...passed 00:11:25.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.395 Test: blockdev write read max offset ...passed 00:11:25.395 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.395 Test: blockdev writev readv 8 blocks ...passed 00:11:25.395 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.395 Test: blockdev writev readv block ...passed 00:11:25.395 Test: blockdev writev readv size > 128k ...passed 00:11:25.395 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.395 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.490682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6e02000 len:0x1000 00:11:25.395 [2024-11-25 10:16:32.490749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.395 passed 00:11:25.395 Test: blockdev nvme passthru rw ...passed 00:11:25.395 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:16:32.491767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:25.395 [2024-11-25 10:16:32.491916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:25.395 passed 00:11:25.395 Test: blockdev nvme admin passthru ...passed 00:11:25.395 Test: blockdev copy ...passed 00:11:25.395 Suite: bdevio tests on: Nvme2n2 00:11:25.395 Test: blockdev write read block ...passed 00:11:25.395 Test: blockdev write zeroes read block ...passed 00:11:25.654 Test: blockdev write zeroes read no split ...passed 00:11:25.654 Test: blockdev write zeroes read split ...passed 00:11:25.654 Test: blockdev write zeroes read split partial ...passed 00:11:25.654 Test: blockdev reset ...[2024-11-25 10:16:32.569409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:25.654 [2024-11-25 10:16:32.573718] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:11:25.654 passed 00:11:25.654 Test: blockdev write read 8 blocks ...
00:11:25.654 passed 00:11:25.654 Test: blockdev write read size > 128k ...passed 00:11:25.654 Test: blockdev write read invalid size ...passed 00:11:25.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.654 Test: blockdev write read max offset ...passed 00:11:25.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.654 Test: blockdev writev readv 8 blocks ...passed 00:11:25.654 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.654 Test: blockdev writev readv block ...passed 00:11:25.654 Test: blockdev writev readv size > 128k ...passed 00:11:25.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.654 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.583628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9438000 len:0x1000 00:11:25.654 [2024-11-25 10:16:32.583807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.654 passed 00:11:25.654 Test: blockdev nvme passthru rw ...passed 00:11:25.654 Test: blockdev nvme passthru vendor specific ...[2024-11-25 10:16:32.585006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:25.654 [2024-11-25 10:16:32.585138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:25.654 passed 00:11:25.654 Test: blockdev nvme admin passthru ...passed 00:11:25.654 Test: blockdev copy ...passed 00:11:25.654 Suite: bdevio tests on: Nvme2n1 00:11:25.654 Test: blockdev write read block ...passed 00:11:25.654 Test: blockdev write zeroes read block ...passed 00:11:25.654 Test: blockdev write zeroes read no split ...passed 00:11:25.654 Test: blockdev write zeroes read split ...passed 00:11:25.654 Test: blockdev write zeroes read split partial ...passed 00:11:25.654 Test: blockdev reset ...[2024-11-25 10:16:32.664771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:25.654 [2024-11-25 10:16:32.669216] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:11:25.654 passed 00:11:25.654 Test: blockdev write read 8 blocks ...passed 00:11:25.654 Test: blockdev write read size > 128k ...passed 00:11:25.654 Test: blockdev write read invalid size ...passed 00:11:25.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.654 Test: blockdev write read max offset ...passed 00:11:25.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.654 Test: blockdev writev readv 8 blocks ...passed 00:11:25.654 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.654 Test: blockdev writev readv block ...passed 00:11:25.654 Test: blockdev writev readv size > 128k ...passed 00:11:25.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.654 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.678653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9434000 len:0x1000 00:11:25.654 [2024-11-25 10:16:32.678705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.654 passed 00:11:25.654 Test: blockdev nvme passthru rw ...passed 00:11:25.654 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.654 Test: blockdev nvme admin passthru ...[2024-11-25 10:16:32.679754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:25.654 [2024-11-25 10:16:32.679789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:25.654 passed 00:11:25.654 Test: blockdev copy ...passed 00:11:25.654 Suite: bdevio tests on: Nvme1n1p2 00:11:25.654 Test: blockdev write read block ...passed 00:11:25.654 Test: blockdev write zeroes read block ...passed 00:11:25.654 Test: blockdev write zeroes read no split ...passed 00:11:25.654 Test: blockdev write zeroes read split ...passed 00:11:25.654 Test: blockdev write zeroes read split partial ...passed 00:11:25.654 Test: blockdev reset ...[2024-11-25 10:16:32.760456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:25.914 [2024-11-25 10:16:32.764209] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:11:25.914 passed 00:11:25.914 Test: blockdev write read 8 blocks ...
00:11:25.914 passed 00:11:25.914 Test: blockdev write read size > 128k ...passed 00:11:25.914 Test: blockdev write read invalid size ...passed 00:11:25.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.914 Test: blockdev write read max offset ...passed 00:11:25.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.914 Test: blockdev writev readv 8 blocks ...passed 00:11:25.914 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.914 Test: blockdev writev readv block ...passed 00:11:25.914 Test: blockdev writev readv size > 128k ...passed 00:11:25.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.914 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.773462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c9430000 len:0x1000 00:11:25.914 [2024-11-25 10:16:32.773525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.914 passed 00:11:25.914 Test: blockdev nvme passthru rw ...passed 00:11:25.914 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.914 Test: blockdev nvme admin passthru ...passed 00:11:25.914 Test: blockdev copy ...passed 00:11:25.914 Suite: bdevio tests on: Nvme1n1p1 00:11:25.914 Test: blockdev write read block ...passed 00:11:25.914 Test: blockdev write zeroes read block ...passed 00:11:25.914 Test: blockdev write zeroes read no split ...passed 00:11:25.914 Test: blockdev write zeroes read split ...passed 00:11:25.914 Test: blockdev write zeroes read split partial ...passed 00:11:25.914 Test: blockdev reset ...[2024-11-25 10:16:32.841473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:25.914 [2024-11-25 10:16:32.845490] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:11:25.914 passed 00:11:25.914 Test: blockdev write read 8 blocks ...
00:11:25.914 passed 00:11:25.914 Test: blockdev write read size > 128k ...passed 00:11:25.914 Test: blockdev write read invalid size ...passed 00:11:25.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.914 Test: blockdev write read max offset ...passed 00:11:25.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.914 Test: blockdev writev readv 8 blocks ...passed 00:11:25.914 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.914 Test: blockdev writev readv block ...passed 00:11:25.914 Test: blockdev writev readv size > 128k ...passed 00:11:25.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.914 Test: blockdev comparev and writev ...passed 00:11:25.914 Test: blockdev nvme passthru rw ...passed 00:11:25.914 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.914 Test: blockdev nvme admin passthru ...passed 00:11:25.914 Test: blockdev copy ...[2024-11-25 10:16:32.854910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b700e000 len:0x1000 00:11:25.914 [2024-11-25 10:16:32.854953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:25.914 passed 00:11:25.914 Suite: bdevio tests on: Nvme0n1 00:11:25.914 Test: blockdev write read block ...passed 00:11:25.914 Test: blockdev write zeroes read block ...passed 00:11:25.914 Test: blockdev write zeroes read no split ...passed 00:11:25.914 Test: blockdev write zeroes read split ...passed 00:11:25.914 Test: blockdev write zeroes read split partial ...passed 00:11:25.914 Test: blockdev reset ...[2024-11-25 10:16:32.928702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:25.914 [2024-11-25 10:16:32.932590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:25.914 passed 00:11:25.914 Test: blockdev write read 8 blocks ...passed 00:11:25.914 Test: blockdev write read size > 128k ...passed 00:11:25.914 Test: blockdev write read invalid size ...passed 00:11:25.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.914 Test: blockdev write read max offset ...passed 00:11:25.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.914 Test: blockdev writev readv 8 blocks ...passed 00:11:25.914 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.914 Test: blockdev writev readv block ...passed 00:11:25.914 Test: blockdev writev readv size > 128k ...passed 00:11:25.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.914 Test: blockdev comparev and writev ...[2024-11-25 10:16:32.940285] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:25.914 separate metadata which is not supported yet. 
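Note the comparev_and_writev case is skipped on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which bdevio does not support yet. A hedged way to check this from the RPC side, assuming this build's bdev_get_bdevs output includes the md_size/md_interleave fields:

  # Show whether a bdev carries metadata and whether it is interleaved with the data.
  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'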
00:11:25.914 passed 00:11:25.914 Test: blockdev nvme passthru rw ...passed 00:11:25.914 Test: blockdev nvme passthru vendor specific ...passed 00:11:25.914 Test: blockdev nvme admin passthru ...[2024-11-25 10:16:32.940939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:25.914 [2024-11-25 10:16:32.940984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:25.914 passed 00:11:25.914 Test: blockdev copy ...passed 00:11:25.914 00:11:25.914 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.914 suites 7 7 n/a 0 0 00:11:25.914 tests 161 161 161 0 0 00:11:25.914 asserts 1025 1025 1025 0 n/a 00:11:25.914 00:11:25.914 Elapsed time = 1.709 seconds 00:11:25.914 0 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62568 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62568 ']' 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62568 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.914 10:16:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62568 00:11:26.173 killing process with pid 62568 00:11:26.173 10:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.173 10:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.173 10:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62568' 00:11:26.173 10:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62568 00:11:26.173 10:16:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62568 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:27.107 00:11:27.107 real 0m2.916s 00:11:27.107 user 0m7.477s 00:11:27.107 sys 0m0.422s 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:27.107 ************************************ 00:11:27.107 END TEST bdev_bounds 00:11:27.107 ************************************ 00:11:27.107 10:16:34 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:27.107 10:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:27.107 10:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.107 10:16:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:27.107 ************************************ 00:11:27.107 START TEST bdev_nbd 00:11:27.107 ************************************ 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:27.107 10:16:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62633 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62633 /var/tmp/spdk-nbd.sock 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62633 ']' 00:11:27.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:27.107 10:16:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:27.365 [2024-11-25 10:16:34.236628] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
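The nbd phase boots bdev_svc against the same bdev.json with an RPC socket at /var/tmp/spdk-nbd.sock; nbd_rpc_start_stop_verify then exports each of the seven bdevs as a kernel block device. Condensed to the two RPCs that the trace below exercises:

  sock=/var/tmp/spdk-nbd.sock
  ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1   # no index given: the next free /dev/nbdX is returned
  ./scripts/rpc.py -s "$sock" nbd_get_disks            # JSON list of nbd_device/bdev_name pairs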
00:11:27.365 [2024-11-25 10:16:34.236789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.365 [2024-11-25 10:16:34.418879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.624 [2024-11-25 10:16:34.538077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.192 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.451 1+0 records in 00:11:28.451 1+0 records out 00:11:28.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720206 s, 5.7 MB/s 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.451 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.710 1+0 records in 00:11:28.710 1+0 records out 00:11:28.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425359 s, 9.6 MB/s 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.710 10:16:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:28.967 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.968 1+0 records in 00:11:28.968 1+0 records out 00:11:28.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494433 s, 8.3 MB/s 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:28.968 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.226 1+0 records in 00:11:29.226 1+0 records out 00:11:29.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661623 s, 6.2 MB/s 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:29.226 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.227 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.227 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:29.227 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.227 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:29.227 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.486 1+0 records in 00:11:29.486 1+0 records out 00:11:29.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00314578 s, 1.3 MB/s 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:29.486 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
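Each waitfornbd round in the trace follows the same pattern: poll /proc/partitions up to 20 times for the new node, then prove the device answers I/O with a single 4 KiB O_DIRECT read, checking the copied size before cleaning up. Paraphrased in shell (output path shortened, and the retry delay is an assumption; the harness writes to test/bdev/nbdtest):

  for i in $(seq 1 20); do grep -q -w nbd3 /proc/partitions && break; sleep 0.1; done
  dd if=/dev/nbd3 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one direct-I/O block proves the export is live
  test "$(stat -c %s /tmp/nbdtest)" -eq 4096 && rm -f /tmp/nbdtest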
00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:29.745 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.746 1+0 records in 00:11:29.746 1+0 records out 00:11:29.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949882 s, 4.3 MB/s 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:29.746 10:16:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.005 1+0 records in 00:11:30.005 1+0 records out 00:11:30.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805936 s, 5.1 MB/s 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:30.005 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd0", 00:11:30.264 "bdev_name": "Nvme0n1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd1", 00:11:30.264 "bdev_name": "Nvme1n1p1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd2", 00:11:30.264 "bdev_name": "Nvme1n1p2" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd3", 00:11:30.264 "bdev_name": "Nvme2n1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd4", 00:11:30.264 "bdev_name": "Nvme2n2" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd5", 00:11:30.264 "bdev_name": "Nvme2n3" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd6", 00:11:30.264 "bdev_name": "Nvme3n1" 00:11:30.264 } 00:11:30.264 ]' 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd0", 00:11:30.264 "bdev_name": "Nvme0n1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd1", 00:11:30.264 "bdev_name": "Nvme1n1p1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd2", 00:11:30.264 "bdev_name": "Nvme1n1p2" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd3", 00:11:30.264 "bdev_name": "Nvme2n1" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd4", 00:11:30.264 "bdev_name": "Nvme2n2" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd5", 00:11:30.264 "bdev_name": "Nvme2n3" 00:11:30.264 }, 00:11:30.264 { 00:11:30.264 "nbd_device": "/dev/nbd6", 00:11:30.264 "bdev_name": "Nvme3n1" 00:11:30.264 } 00:11:30.264 ]' 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.264 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:30.523 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:30.523 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.524 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.783 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.042 10:16:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.302 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.561 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.821 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
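Teardown is the mirror image of the probe above: nbd_stop_disk per device, then waitfornbd_exit polls /proc/partitions until the name disappears (the break fires once grep stops matching). Condensed, with the retry delay again assumed:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
  for i in $(seq 1 20); do grep -q -w nbd6 /proc/partitions || break; sleep 0.1; done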
00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.079 10:16:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.339 10:16:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.339 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.340 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:32.599 /dev/nbd0 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.599 1+0 records in 00:11:32.599 1+0 records out 00:11:32.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663813 s, 6.2 MB/s 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.599 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:32.859 /dev/nbd1 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:32.859 10:16:39 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.859 1+0 records in 00:11:32.859 1+0 records out 00:11:32.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618199 s, 6.6 MB/s 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.859 10:16:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:33.119 /dev/nbd10 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.119 1+0 records in 00:11:33.119 1+0 records out 00:11:33.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850848 s, 4.8 MB/s 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:33.119 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:33.378 /dev/nbd11 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.378 1+0 records in 00:11:33.378 1+0 records out 00:11:33.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889623 s, 4.6 MB/s 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:33.378 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:33.638 /dev/nbd12 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
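Unlike the start/stop pass, this data-verify pass (nbd_rpc_data_verify) starts each disk at an explicit index rather than letting the kernel pick one, so the bdev-to-device mapping is deterministic: Nvme1n1p2 on /dev/nbd10, Nvme2n1 on /dev/nbd11, and so on. For example, straight from the trace:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12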
00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.638 1+0 records in 00:11:33.638 1+0 records out 00:11:33.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811331 s, 5.0 MB/s 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:33.638 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:33.898 /dev/nbd13 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.898 1+0 records in 00:11:33.898 1+0 records out 00:11:33.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769705 s, 5.3 MB/s 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:33.898 10:16:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:34.157 /dev/nbd14 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.157 1+0 records in 00:11:34.157 1+0 records out 00:11:34.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914291 s, 4.5 MB/s 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.157 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:34.158 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:34.158 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:34.158 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd0", 00:11:34.419 "bdev_name": "Nvme0n1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd1", 00:11:34.419 "bdev_name": "Nvme1n1p1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd10", 00:11:34.419 "bdev_name": "Nvme1n1p2" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd11", 00:11:34.419 "bdev_name": "Nvme2n1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd12", 00:11:34.419 "bdev_name": "Nvme2n2" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd13", 00:11:34.419 "bdev_name": "Nvme2n3" 
00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd14", 00:11:34.419 "bdev_name": "Nvme3n1" 00:11:34.419 } 00:11:34.419 ]' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd0", 00:11:34.419 "bdev_name": "Nvme0n1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd1", 00:11:34.419 "bdev_name": "Nvme1n1p1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd10", 00:11:34.419 "bdev_name": "Nvme1n1p2" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd11", 00:11:34.419 "bdev_name": "Nvme2n1" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd12", 00:11:34.419 "bdev_name": "Nvme2n2" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd13", 00:11:34.419 "bdev_name": "Nvme2n3" 00:11:34.419 }, 00:11:34.419 { 00:11:34.419 "nbd_device": "/dev/nbd14", 00:11:34.419 "bdev_name": "Nvme3n1" 00:11:34.419 } 00:11:34.419 ]' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:34.419 /dev/nbd1 00:11:34.419 /dev/nbd10 00:11:34.419 /dev/nbd11 00:11:34.419 /dev/nbd12 00:11:34.419 /dev/nbd13 00:11:34.419 /dev/nbd14' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:34.419 /dev/nbd1 00:11:34.419 /dev/nbd10 00:11:34.419 /dev/nbd11 00:11:34.419 /dev/nbd12 00:11:34.419 /dev/nbd13 00:11:34.419 /dev/nbd14' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:34.419 256+0 records in 00:11:34.419 256+0 records out 00:11:34.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123405 s, 85.0 MB/s 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.419 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:34.677 256+0 records in 00:11:34.677 256+0 records out 00:11:34.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.133363 s, 7.9 MB/s 00:11:34.677 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.677 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:34.677 256+0 records in 00:11:34.677 256+0 records out 00:11:34.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140719 s, 7.5 MB/s 00:11:34.677 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.677 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:34.935 256+0 records in 00:11:34.935 256+0 records out 00:11:34.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14432 s, 7.3 MB/s 00:11:34.935 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.935 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:34.935 256+0 records in 00:11:34.935 256+0 records out 00:11:34.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144128 s, 7.3 MB/s 00:11:34.935 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:34.935 10:16:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:35.194 256+0 records in 00:11:35.194 256+0 records out 00:11:35.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137423 s, 7.6 MB/s 00:11:35.194 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.194 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:35.194 256+0 records in 00:11:35.194 256+0 records out 00:11:35.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139371 s, 7.5 MB/s 00:11:35.194 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:35.194 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:35.453 256+0 records in 00:11:35.453 256+0 records out 00:11:35.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138444 s, 7.6 MB/s 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.453 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.712 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.971 10:16:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.231 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.490 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:36.747 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.006 10:16:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.006 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:37.266 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:37.527 malloc_lvol_verify 00:11:37.527 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:37.786 b081a070-b069-4cad-a0b0-0622fbba64e2 00:11:37.786 10:16:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:38.045 a6469c49-d417-4862-ad94-bce02943e545 00:11:38.045 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:38.304 /dev/nbd0 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:38.304 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.304 Discarding device blocks: 0/4096 done 00:11:38.304 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:38.304 00:11:38.304 Allocating group tables: 0/1 done 00:11:38.304 Writing inode tables: 0/1 done 00:11:38.304 Creating journal (1024 blocks): done 00:11:38.304 Writing superblocks and filesystem accounting information: 0/1 done 00:11:38.304 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:38.304 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62633 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62633 ']' 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62633 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62633 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.563 killing process with pid 62633 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62633' 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62633 00:11:38.563 10:16:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62633 00:11:39.939 10:16:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:39.939 00:11:39.939 real 0m12.676s 00:11:39.939 user 0m16.537s 00:11:39.939 sys 0m5.276s 00:11:39.939 ************************************ 00:11:39.939 END TEST bdev_nbd 00:11:39.939 ************************************ 00:11:39.939 10:16:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.939 10:16:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:11:39.939 skipping fio tests on NVMe due to multi-ns failures. 00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:39.939 10:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:39.939 10:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:39.939 10:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.939 10:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:39.939 ************************************ 00:11:39.939 START TEST bdev_verify 00:11:39.939 ************************************ 00:11:39.939 10:16:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:39.939 [2024-11-25 10:16:46.967718] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:39.939 [2024-11-25 10:16:46.967843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63062 ] 00:11:40.197 [2024-11-25 10:16:47.149443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:40.197 [2024-11-25 10:16:47.271145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.197 [2024-11-25 10:16:47.271179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.133 Running I/O for 5 seconds... 
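bdev_verify moves from the kernel NBD path to SPDK's own I/O path: bdevperf opens the same seven bdevs from bdev.json and runs a verify workload, in which every completed read is checked against the data previously written to that LBA. The invocation from the run_test line above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Queue depth 128, 4 KiB I/Os, a 5-second run, and core mask 0x3 so the two reactors started above split the jobs (hence the per-device Core Mask 0x1 and 0x2 rows in the table below). As a sanity check on those numbers, MiB/s is simply IOPS times I/O size: the final 21388.80 IOPS x 4096 bytes comes to 83.55 MiB/s, exactly what bdevperf reports.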
00:11:43.447 20736.00 IOPS, 81.00 MiB/s [2024-11-25T10:16:51.495Z] 20960.00 IOPS, 81.88 MiB/s [2024-11-25T10:16:52.431Z] 21013.33 IOPS, 82.08 MiB/s [2024-11-25T10:16:53.380Z] 21232.00 IOPS, 82.94 MiB/s [2024-11-25T10:16:53.380Z] 21388.80 IOPS, 83.55 MiB/s 00:11:46.268 Latency(us) 00:11:46.268 [2024-11-25T10:16:53.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x0 length 0xbd0bd 00:11:46.268 Nvme0n1 : 5.05 1519.35 5.93 0.00 0.00 84026.38 18844.89 87591.89 00:11:46.268 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:46.268 Nvme0n1 : 5.05 1493.98 5.84 0.00 0.00 85408.28 20108.23 84222.97 00:11:46.268 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x0 length 0x4ff80 00:11:46.268 Nvme1n1p1 : 5.06 1518.10 5.93 0.00 0.00 83924.92 21476.86 83801.86 00:11:46.268 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:46.268 Nvme1n1p1 : 5.06 1493.28 5.83 0.00 0.00 85287.69 21161.02 80854.05 00:11:46.268 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x0 length 0x4ff7f 00:11:46.268 Nvme1n1p2 : 5.06 1517.36 5.93 0.00 0.00 83777.41 21266.30 85486.32 00:11:46.268 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:46.268 Nvme1n1p2 : 5.06 1492.84 5.83 0.00 0.00 85048.05 22424.37 82117.40 00:11:46.268 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x0 length 0x80000 00:11:46.268 Nvme2n1 : 5.06 1516.99 5.93 0.00 0.00 83675.24 20634.63 87170.78 00:11:46.268 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.268 Verification LBA range: start 0x80000 length 0x80000 00:11:46.268 Nvme2n1 : 5.08 1500.19 5.86 0.00 0.00 84480.37 4737.54 80011.82 00:11:46.268 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x0 length 0x80000 00:11:46.269 Nvme2n2 : 5.06 1516.60 5.92 0.00 0.00 83552.90 19792.40 88434.12 00:11:46.269 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x80000 length 0x80000 00:11:46.269 Nvme2n2 : 5.09 1508.87 5.89 0.00 0.00 83899.41 11264.82 78748.48 00:11:46.269 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x0 length 0x80000 00:11:46.269 Nvme2n3 : 5.07 1526.76 5.96 0.00 0.00 82972.05 3737.39 88013.01 00:11:46.269 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x80000 length 0x80000 00:11:46.269 Nvme2n3 : 5.09 1508.54 5.89 0.00 0.00 83791.82 11422.74 82117.40 00:11:46.269 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x0 length 0x20000 00:11:46.269 Nvme3n1 : 5.07 1526.21 5.96 0.00 0.00 82878.87 4421.71 91381.92 00:11:46.269 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.269 Verification LBA range: start 0x20000 length 0x20000 00:11:46.269 Nvme3n1 
: 5.09 1508.21 5.89 0.00 0.00 83735.24 11054.27 82538.51 00:11:46.269 [2024-11-25T10:16:53.381Z] =================================================================================================================== 00:11:46.269 [2024-11-25T10:16:53.381Z] Total : 21147.27 82.61 0.00 0.00 84026.82 3737.39 91381.92 00:11:47.646 00:11:47.646 real 0m7.671s 00:11:47.646 user 0m14.153s 00:11:47.646 sys 0m0.316s 00:11:47.646 10:16:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.646 ************************************ 00:11:47.646 END TEST bdev_verify 00:11:47.646 10:16:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:47.646 ************************************ 00:11:47.646 10:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:47.646 10:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:47.646 10:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.646 10:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:47.646 ************************************ 00:11:47.646 START TEST bdev_verify_big_io 00:11:47.646 ************************************ 00:11:47.646 10:16:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:47.646 [2024-11-25 10:16:54.709754] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:47.646 [2024-11-25 10:16:54.709900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63166 ] 00:11:47.905 [2024-11-25 10:16:54.896524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:48.163 [2024-11-25 10:16:55.022987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.164 [2024-11-25 10:16:55.023035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.130 Running I/O for 5 seconds... 
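bdev_verify_big_io is the same harness with -o 65536, so each I/O is 64 KiB and the MiB/s column in the table below is IOPS divided by 16 (sixteen 64 KiB I/Os per MiB). The totals check out: 2186.59 IOPS x 64 KiB = 136.66 MiB/s, matching the Total row at the end of the table.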
00:11:53.567 1953.00 IOPS, 122.06 MiB/s [2024-11-25T10:17:02.058Z] 3242.00 IOPS, 202.62 MiB/s [2024-11-25T10:17:02.058Z] 3635.33 IOPS, 227.21 MiB/s 00:11:54.946 Latency(us) 00:11:54.946 [2024-11-25T10:17:02.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.946 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0xbd0b 00:11:54.946 Nvme0n1 : 5.60 154.67 9.67 0.00 0.00 785654.81 15581.25 889394.58 00:11:54.946 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:54.946 Nvme0n1 : 5.67 132.74 8.30 0.00 0.00 890158.41 44848.73 1482324.31 00:11:54.946 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x4ff8 00:11:54.946 Nvme1n1p1 : 5.60 151.02 9.44 0.00 0.00 800174.48 69905.07 1192597.28 00:11:54.946 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:54.946 Nvme1n1p1 : 5.70 152.14 9.51 0.00 0.00 756736.72 27161.91 825385.12 00:11:54.946 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x4ff7 00:11:54.946 Nvme1n1p2 : 5.66 154.14 9.63 0.00 0.00 768975.17 50954.90 1212810.80 00:11:54.946 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:54.946 Nvme1n1p2 : 5.70 157.17 9.82 0.00 0.00 720530.46 30741.38 852336.48 00:11:54.946 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x8000 00:11:54.946 Nvme2n1 : 5.72 159.78 9.99 0.00 0.00 726944.46 28635.81 1226286.47 00:11:54.946 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x8000 length 0x8000 00:11:54.946 Nvme2n1 : 5.74 173.96 10.87 0.00 0.00 639217.68 2145.05 875918.91 00:11:54.946 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x8000 00:11:54.946 Nvme2n2 : 5.74 165.11 10.32 0.00 0.00 689132.26 28846.37 1239762.15 00:11:54.946 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x8000 length 0x8000 00:11:54.946 Nvme2n2 : 5.59 141.40 8.84 0.00 0.00 870658.50 18634.33 815278.37 00:11:54.946 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x8000 00:11:54.946 Nvme2n3 : 5.74 168.36 10.52 0.00 0.00 660215.07 17476.27 1145432.42 00:11:54.946 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x8000 length 0x8000 00:11:54.946 Nvme2n3 : 5.66 141.25 8.83 0.00 0.00 865314.71 75800.67 771482.42 00:11:54.946 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x0 length 0x2000 00:11:54.946 Nvme3n1 : 5.82 206.92 12.93 0.00 0.00 529807.92 736.95 1280189.17 00:11:54.946 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.946 Verification LBA range: start 0x2000 length 0x2000 00:11:54.946 Nvme3n1 : 5.64 127.93 8.00 0.00 0.00 936115.26 45690.96 1650770.25 00:11:54.946 
[2024-11-25T10:17:02.058Z] =================================================================================================================== 00:11:54.946 [2024-11-25T10:17:02.058Z] Total : 2186.59 136.66 0.00 0.00 746398.83 736.95 1650770.25 00:11:56.857 00:11:56.857 real 0m9.079s 00:11:56.857 user 0m16.905s 00:11:56.857 sys 0m0.353s 00:11:56.857 10:17:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.857 ************************************ 00:11:56.857 END TEST bdev_verify_big_io 00:11:56.857 10:17:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.858 ************************************ 00:11:56.858 10:17:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.858 10:17:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:56.858 10:17:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.858 10:17:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:56.858 ************************************ 00:11:56.858 START TEST bdev_write_zeroes 00:11:56.858 ************************************ 00:11:56.858 10:17:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.858 [2024-11-25 10:17:03.858291] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:11:56.858 [2024-11-25 10:17:03.858421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63276 ] 00:11:57.117 [2024-11-25 10:17:04.038832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.117 [2024-11-25 10:17:04.158739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.051 Running I/O for 1 seconds... 
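The write_zeroes pass below obeys the same identity at 4 KiB: the summary line of 63616.00 IOPS x 4096 bytes works out to 248.50 MiB/s, which is what bdevperf prints. write_zeroes issues zero-fill commands rather than buffered data writes, and the test runs it for one second (-t 1) across all seven bdevs.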
00:11:58.983 63616.00 IOPS, 248.50 MiB/s 00:11:58.983 Latency(us) 00:11:58.983 [2024-11-25T10:17:06.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.983 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme0n1 : 1.02 9055.30 35.37 0.00 0.00 14089.46 12212.33 33689.19 00:11:58.983 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme1n1p1 : 1.03 9045.01 35.33 0.00 0.00 14087.04 12054.41 34952.53 00:11:58.983 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme1n1p2 : 1.03 9035.32 35.29 0.00 0.00 14054.67 11896.49 33268.07 00:11:58.983 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme2n1 : 1.03 9077.15 35.46 0.00 0.00 13913.36 7580.07 25898.56 00:11:58.983 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme2n2 : 1.03 9068.33 35.42 0.00 0.00 13883.48 7685.35 23687.71 00:11:58.983 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme2n3 : 1.03 9060.01 35.39 0.00 0.00 13862.78 7737.99 23371.87 00:11:58.983 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.983 Nvme3n1 : 1.03 9051.71 35.36 0.00 0.00 13844.09 7895.90 23792.99 00:11:58.983 [2024-11-25T10:17:06.095Z] =================================================================================================================== 00:11:58.983 [2024-11-25T10:17:06.095Z] Total : 63392.84 247.63 0.00 0.00 13961.79 7580.07 34952.53 00:12:00.355 00:12:00.355 real 0m3.297s 00:12:00.355 user 0m2.905s 00:12:00.355 sys 0m0.274s 00:12:00.355 10:17:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.355 10:17:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 ************************************ 00:12:00.355 END TEST bdev_write_zeroes 00:12:00.355 ************************************ 00:12:00.355 10:17:07 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.355 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:00.355 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.355 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:00.355 ************************************ 00:12:00.355 START TEST bdev_json_nonenclosed 00:12:00.355 ************************************ 00:12:00.355 10:17:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.355 [2024-11-25 10:17:07.220411] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:12:00.355 [2024-11-25 10:17:07.220564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63335 ] 00:12:00.355 [2024-11-25 10:17:07.401922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.612 [2024-11-25 10:17:07.519842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.612 [2024-11-25 10:17:07.519949] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:00.612 [2024-11-25 10:17:07.519972] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:00.612 [2024-11-25 10:17:07.519985] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:00.870 00:12:00.870 real 0m0.647s 00:12:00.870 user 0m0.403s 00:12:00.870 sys 0m0.140s 00:12:00.870 10:17:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.870 10:17:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 ************************************ 00:12:00.870 END TEST bdev_json_nonenclosed 00:12:00.870 ************************************ 00:12:00.870 10:17:07 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.870 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:00.870 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.870 10:17:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 ************************************ 00:12:00.870 START TEST bdev_json_nonarray 00:12:00.870 ************************************ 00:12:00.870 10:17:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:00.870 [2024-11-25 10:17:07.941676] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:12:00.870 [2024-11-25 10:17:07.941804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63355 ] 00:12:01.127 [2024-11-25 10:17:08.119658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.455 [2024-11-25 10:17:08.242112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.455 [2024-11-25 10:17:08.242228] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
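Both JSON negative tests hand bdevperf a deliberately malformed --json config and expect json_config_prepare_ctx to reject it: nonenclosed.json is valid JSON that is not wrapped in a top-level {} object, and nonarray.json has a "subsystems" value that is not an array. For contrast, a schematic of the shape these checks enforce (abbreviated, not the full bdev.json used by the other tests in this run):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [ ]
      }
    ]
  }

In both failure cases the app exits through spdk_app_stop with a non-zero code, which is the outcome the test asserts.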
00:12:01.455 [2024-11-25 10:17:08.242251] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:01.455 [2024-11-25 10:17:08.242264] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:01.455 00:12:01.455 real 0m0.655s 00:12:01.455 user 0m0.416s 00:12:01.455 sys 0m0.133s 00:12:01.455 10:17:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.455 10:17:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:01.455 ************************************ 00:12:01.455 END TEST bdev_json_nonarray 00:12:01.455 ************************************ 00:12:01.455 10:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:12:01.455 10:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:12:01.455 10:17:08 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:12:01.455 10:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.455 10:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.455 10:17:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:01.714 ************************************ 00:12:01.714 START TEST bdev_gpt_uuid 00:12:01.714 ************************************ 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63386 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63386 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63386 ']' 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.714 10:17:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:01.714 [2024-11-25 10:17:08.683899] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
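bdev_gpt_uuid starts a standalone spdk_tgt, loads the same bdev.json, and then verifies that GPT partition metadata round-trips: each partition bdev is fetched by its unique partition GUID, and jq assertions compare the bdev's alias and driver-specific GUIDs against the expected value. The pattern, using the rpc.py and jq calls visible in the trace that follows (the GUID and paths are the ones from this run):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030
  bdev=$($rpc bdev_get_bdevs -b "$uuid")
  [[ $(jq -r length <<< "$bdev") == 1 ]]                     # exactly one matching bdev
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]    # alias is the partition GUID
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]

The same three assertions are then repeated for the second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df).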
00:12:01.714 [2024-11-25 10:17:08.684039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:12:01.971 [2024-11-25 10:17:08.863117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.971 [2024-11-25 10:17:08.985400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.959 10:17:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.959 10:17:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:12:02.959 10:17:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:02.959 10:17:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.959 10:17:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.217 Some configs were skipped because the RPC state that can call them passed over. 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:12:03.217 { 00:12:03.217 "name": "Nvme1n1p1", 00:12:03.217 "aliases": [ 00:12:03.217 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:03.217 ], 00:12:03.217 "product_name": "GPT Disk", 00:12:03.217 "block_size": 4096, 00:12:03.217 "num_blocks": 655104, 00:12:03.217 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:03.217 "assigned_rate_limits": { 00:12:03.217 "rw_ios_per_sec": 0, 00:12:03.217 "rw_mbytes_per_sec": 0, 00:12:03.217 "r_mbytes_per_sec": 0, 00:12:03.217 "w_mbytes_per_sec": 0 00:12:03.217 }, 00:12:03.217 "claimed": false, 00:12:03.217 "zoned": false, 00:12:03.217 "supported_io_types": { 00:12:03.217 "read": true, 00:12:03.217 "write": true, 00:12:03.217 "unmap": true, 00:12:03.217 "flush": true, 00:12:03.217 "reset": true, 00:12:03.217 "nvme_admin": false, 00:12:03.217 "nvme_io": false, 00:12:03.217 "nvme_io_md": false, 00:12:03.217 "write_zeroes": true, 00:12:03.217 "zcopy": false, 00:12:03.217 "get_zone_info": false, 00:12:03.217 "zone_management": false, 00:12:03.217 "zone_append": false, 00:12:03.217 "compare": true, 00:12:03.217 "compare_and_write": false, 00:12:03.217 "abort": true, 00:12:03.217 "seek_hole": false, 00:12:03.217 "seek_data": false, 00:12:03.217 "copy": true, 00:12:03.217 "nvme_iov_md": false 00:12:03.217 }, 00:12:03.217 "driver_specific": { 
00:12:03.217 "gpt": { 00:12:03.217 "base_bdev": "Nvme1n1", 00:12:03.217 "offset_blocks": 256, 00:12:03.217 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:03.217 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:03.217 "partition_name": "SPDK_TEST_first" 00:12:03.217 } 00:12:03.217 } 00:12:03.217 } 00:12:03.217 ]' 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:03.217 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.475 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:12:03.475 { 00:12:03.475 "name": "Nvme1n1p2", 00:12:03.475 "aliases": [ 00:12:03.475 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:03.475 ], 00:12:03.475 "product_name": "GPT Disk", 00:12:03.475 "block_size": 4096, 00:12:03.475 "num_blocks": 655103, 00:12:03.475 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:03.475 "assigned_rate_limits": { 00:12:03.475 "rw_ios_per_sec": 0, 00:12:03.475 "rw_mbytes_per_sec": 0, 00:12:03.475 "r_mbytes_per_sec": 0, 00:12:03.475 "w_mbytes_per_sec": 0 00:12:03.475 }, 00:12:03.475 "claimed": false, 00:12:03.476 "zoned": false, 00:12:03.476 "supported_io_types": { 00:12:03.476 "read": true, 00:12:03.476 "write": true, 00:12:03.476 "unmap": true, 00:12:03.476 "flush": true, 00:12:03.476 "reset": true, 00:12:03.476 "nvme_admin": false, 00:12:03.476 "nvme_io": false, 00:12:03.476 "nvme_io_md": false, 00:12:03.476 "write_zeroes": true, 00:12:03.476 "zcopy": false, 00:12:03.476 "get_zone_info": false, 00:12:03.476 "zone_management": false, 00:12:03.476 "zone_append": false, 00:12:03.476 "compare": true, 00:12:03.476 "compare_and_write": false, 00:12:03.476 "abort": true, 00:12:03.476 "seek_hole": false, 00:12:03.476 "seek_data": false, 00:12:03.476 "copy": true, 00:12:03.476 "nvme_iov_md": false 00:12:03.476 }, 00:12:03.476 "driver_specific": { 00:12:03.476 "gpt": { 00:12:03.476 "base_bdev": "Nvme1n1", 00:12:03.476 "offset_blocks": 655360, 00:12:03.476 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:03.476 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:03.476 "partition_name": "SPDK_TEST_second" 00:12:03.476 } 00:12:03.476 } 00:12:03.476 } 00:12:03.476 ]' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63386 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63386 ']' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63386 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63386 00:12:03.476 killing process with pid 63386 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63386' 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63386 00:12:03.476 10:17:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63386 00:12:06.005 00:12:06.005 real 0m4.432s 00:12:06.005 user 0m4.513s 00:12:06.005 sys 0m0.561s 00:12:06.005 10:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.005 ************************************ 00:12:06.005 END TEST bdev_gpt_uuid 00:12:06.005 ************************************ 00:12:06.005 10:17:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:06.005 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:12:06.005 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:06.006 10:17:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:06.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:06.830 Waiting for block devices as requested 00:12:06.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.088 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:07.088 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.347 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.615 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:12.615 10:17:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:12.615 10:17:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:12.615 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:12.615 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:12.615 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:12.615 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:12.615 10:17:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:12.615 ************************************ 00:12:12.615 END TEST blockdev_nvme_gpt 00:12:12.615 ************************************ 00:12:12.615 00:12:12.615 real 1m5.409s 00:12:12.615 user 1m21.305s 00:12:12.615 sys 0m12.103s 00:12:12.615 10:17:19 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.615 10:17:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:12.615 10:17:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:12.615 10:17:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.615 10:17:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.615 10:17:19 -- common/autotest_common.sh@10 -- # set +x 00:12:12.615 ************************************ 00:12:12.615 START TEST nvme 00:12:12.615 ************************************ 00:12:12.615 10:17:19 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:12.874 * Looking for test storage... 00:12:12.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.874 10:17:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.874 10:17:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.874 10:17:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.874 10:17:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.874 10:17:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.874 10:17:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:12.874 10:17:19 nvme -- scripts/common.sh@345 -- # : 1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.874 10:17:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.874 10:17:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@353 -- # local d=1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.874 10:17:19 nvme -- scripts/common.sh@355 -- # echo 1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.874 10:17:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@353 -- # local d=2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.874 10:17:19 nvme -- scripts/common.sh@355 -- # echo 2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.874 10:17:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.874 10:17:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.874 10:17:19 nvme -- scripts/common.sh@368 -- # return 0 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.874 --rc genhtml_branch_coverage=1 00:12:12.874 --rc genhtml_function_coverage=1 00:12:12.874 --rc genhtml_legend=1 00:12:12.874 --rc geninfo_all_blocks=1 00:12:12.874 --rc geninfo_unexecuted_blocks=1 00:12:12.874 00:12:12.874 ' 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.874 --rc genhtml_branch_coverage=1 00:12:12.874 --rc genhtml_function_coverage=1 00:12:12.874 --rc genhtml_legend=1 00:12:12.874 --rc geninfo_all_blocks=1 00:12:12.874 --rc geninfo_unexecuted_blocks=1 00:12:12.874 00:12:12.874 ' 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.874 --rc genhtml_branch_coverage=1 00:12:12.874 --rc genhtml_function_coverage=1 00:12:12.874 --rc genhtml_legend=1 00:12:12.874 --rc geninfo_all_blocks=1 00:12:12.874 --rc geninfo_unexecuted_blocks=1 00:12:12.874 00:12:12.874 ' 00:12:12.874 10:17:19 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.874 --rc genhtml_branch_coverage=1 00:12:12.874 --rc genhtml_function_coverage=1 00:12:12.874 --rc genhtml_legend=1 00:12:12.874 --rc geninfo_all_blocks=1 00:12:12.874 --rc geninfo_unexecuted_blocks=1 00:12:12.874 00:12:12.874 ' 00:12:12.874 10:17:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:13.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.374 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.374 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.374 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.374 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:14.632 10:17:21 nvme -- nvme/nvme.sh@79 -- # uname 00:12:14.632 10:17:21 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:14.632 10:17:21 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:14.632 10:17:21 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:14.632 10:17:21 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1075 -- # stubpid=64049 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:14.632 Waiting for stub to ready for secondary processes... 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64049 ]] 00:12:14.632 10:17:21 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:14.632 [2024-11-25 10:17:21.570109] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:12:14.632 [2024-11-25 10:17:21.570248] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:15.566 10:17:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:15.566 10:17:22 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64049 ]] 00:12:15.566 10:17:22 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:15.566 [2024-11-25 10:17:22.610371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.826 [2024-11-25 10:17:22.723449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.826 [2024-11-25 10:17:22.723594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.826 [2024-11-25 10:17:22.723626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.826 [2024-11-25 10:17:22.741414] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:15.826 [2024-11-25 10:17:22.741456] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.826 [2024-11-25 10:17:22.758384] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:15.826 [2024-11-25 10:17:22.758537] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:15.826 [2024-11-25 10:17:22.762370] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.826 [2024-11-25 10:17:22.762813] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:15.826 [2024-11-25 10:17:22.762917] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:15.826 [2024-11-25 10:17:22.765384] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.826 [2024-11-25 10:17:22.765721] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:15.826 [2024-11-25 10:17:22.765826] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:15.826 [2024-11-25 10:17:22.768415] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:15.826 [2024-11-25 10:17:22.768637] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:15.826 [2024-11-25 10:17:22.768897] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:15.826 [2024-11-25 10:17:22.768972] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:15.826 [2024-11-25 10:17:22.769048] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:16.475 done. 00:12:16.475 10:17:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:16.475 10:17:23 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:16.475 10:17:23 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:16.475 10:17:23 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:16.475 10:17:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.475 10:17:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.475 ************************************ 00:12:16.475 START TEST nvme_reset 00:12:16.475 ************************************ 00:12:16.475 10:17:23 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:16.734 Initializing NVMe Controllers 00:12:16.734 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:16.735 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:16.735 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:16.735 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:16.735 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:16.735 00:12:16.735 real 0m0.278s 00:12:16.735 user 0m0.100s 00:12:16.735 sys 0m0.132s 00:12:16.735 10:17:23 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.735 ************************************ 00:12:16.735 END TEST nvme_reset 00:12:16.735 ************************************ 00:12:16.735 10:17:23 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:16.994 10:17:23 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:16.994 10:17:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.994 10:17:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.994 10:17:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.994 ************************************ 00:12:16.994 START TEST nvme_identify 00:12:16.994 ************************************ 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:16.994 10:17:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:16.994 10:17:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:16.994 10:17:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:16.994 10:17:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:16.994 10:17:23 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:16.994 10:17:23 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:16.994 10:17:23 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:17.255 [2024-11-25 10:17:24.250703] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64083 terminated unexpected 00:12:17.255 ===================================================== 00:12:17.255 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:17.255 ===================================================== 00:12:17.255 Controller Capabilities/Features 00:12:17.255 ================================ 00:12:17.255 Vendor ID: 1b36 00:12:17.255 Subsystem Vendor ID: 1af4 00:12:17.255 Serial Number: 12340 00:12:17.255 Model Number: QEMU NVMe Ctrl 00:12:17.255 Firmware Version: 8.0.0 00:12:17.255 Recommended Arb Burst: 6 00:12:17.255 IEEE OUI Identifier: 00 54 52 00:12:17.255 Multi-path I/O 00:12:17.255 May have multiple subsystem ports: No 00:12:17.255 May have multiple controllers: No 00:12:17.255 Associated with SR-IOV VF: No 00:12:17.255 Max Data Transfer Size: 524288 00:12:17.255 Max Number of Namespaces: 256 00:12:17.255 Max Number of I/O Queues: 64 00:12:17.255 NVMe Specification Version (VS): 1.4 00:12:17.255 NVMe Specification Version (Identify): 1.4 00:12:17.255 Maximum Queue Entries: 2048 00:12:17.255 Contiguous Queues Required: Yes 00:12:17.255 Arbitration Mechanisms Supported 00:12:17.255 Weighted Round Robin: Not Supported 00:12:17.255 Vendor Specific: Not Supported 00:12:17.255 Reset Timeout: 7500 ms 00:12:17.255 Doorbell Stride: 4 bytes 00:12:17.255 NVM Subsystem Reset: Not Supported 00:12:17.255 Command Sets Supported 00:12:17.255 NVM Command Set: Supported 00:12:17.255 Boot Partition: Not Supported 00:12:17.255 Memory Page Size Minimum: 4096 bytes 00:12:17.255 Memory Page Size Maximum: 65536 bytes 00:12:17.255 Persistent Memory Region: Not Supported 00:12:17.255 Optional Asynchronous Events Supported 00:12:17.255 Namespace Attribute Notices: Supported 00:12:17.255 Firmware Activation Notices: Not Supported 00:12:17.256 ANA Change Notices: Not Supported 00:12:17.256 PLE Aggregate Log Change Notices: Not Supported 00:12:17.256 LBA Status Info Alert Notices: Not Supported 00:12:17.256 EGE Aggregate Log Change Notices: Not Supported 00:12:17.256 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.256 Zone Descriptor Change Notices: Not Supported 00:12:17.256 Discovery Log Change Notices: Not Supported 00:12:17.256 Controller Attributes 00:12:17.256 128-bit Host Identifier: Not Supported 00:12:17.256 Non-Operational Permissive Mode: Not Supported 00:12:17.256 NVM Sets: Not Supported 00:12:17.256 Read Recovery Levels: Not Supported 00:12:17.256 Endurance Groups: Not Supported 00:12:17.256 Predictable Latency Mode: Not Supported 00:12:17.256 Traffic Based Keep ALive: Not Supported 00:12:17.256 Namespace Granularity: Not Supported 00:12:17.256 SQ Associations: Not Supported 00:12:17.256 UUID List: Not Supported 00:12:17.256 Multi-Domain Subsystem: Not Supported 00:12:17.256 Fixed Capacity Management: Not Supported 00:12:17.256 Variable Capacity Management: Not Supported 00:12:17.256 Delete Endurance Group: Not Supported 00:12:17.256 Delete NVM Set: Not Supported 00:12:17.256 Extended LBA Formats Supported: Supported 00:12:17.256 Flexible Data Placement Supported: Not Supported 00:12:17.256 00:12:17.256 Controller Memory Buffer Support 00:12:17.256 ================================ 00:12:17.256 Supported: No 
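
The get_nvme_bdfs helper above builds the device list for this identify pass by piping the JSON emitted by scripts/gen_nvme.sh through jq, exactly as the xtrace shows. A minimal standalone sketch of that pattern, assuming the repo checkout at /home/vagrant/spdk_repo/spdk as used on this rig:

    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses (BDFs) the way get_nvme_bdfs does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
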
00:12:17.256 00:12:17.256 Persistent Memory Region Support 00:12:17.256 ================================ 00:12:17.256 Supported: No 00:12:17.256 00:12:17.256 Admin Command Set Attributes 00:12:17.256 ============================ 00:12:17.256 Security Send/Receive: Not Supported 00:12:17.256 Format NVM: Supported 00:12:17.256 Firmware Activate/Download: Not Supported 00:12:17.256 Namespace Management: Supported 00:12:17.256 Device Self-Test: Not Supported 00:12:17.256 Directives: Supported 00:12:17.256 NVMe-MI: Not Supported 00:12:17.256 Virtualization Management: Not Supported 00:12:17.256 Doorbell Buffer Config: Supported 00:12:17.256 Get LBA Status Capability: Not Supported 00:12:17.256 Command & Feature Lockdown Capability: Not Supported 00:12:17.256 Abort Command Limit: 4 00:12:17.256 Async Event Request Limit: 4 00:12:17.256 Number of Firmware Slots: N/A 00:12:17.256 Firmware Slot 1 Read-Only: N/A 00:12:17.256 Firmware Activation Without Reset: N/A 00:12:17.256 Multiple Update Detection Support: N/A 00:12:17.256 Firmware Update Granularity: No Information Provided 00:12:17.256 Per-Namespace SMART Log: Yes 00:12:17.256 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.256 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:17.256 Command Effects Log Page: Supported 00:12:17.256 Get Log Page Extended Data: Supported 00:12:17.256 Telemetry Log Pages: Not Supported 00:12:17.256 Persistent Event Log Pages: Not Supported 00:12:17.256 Supported Log Pages Log Page: May Support 00:12:17.256 Commands Supported & Effects Log Page: Not Supported 00:12:17.256 Feature Identifiers & Effects Log Page:May Support 00:12:17.256 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.256 Data Area 4 for Telemetry Log: Not Supported 00:12:17.256 Error Log Page Entries Supported: 1 00:12:17.256 Keep Alive: Not Supported 00:12:17.256 00:12:17.256 NVM Command Set Attributes 00:12:17.256 ========================== 00:12:17.256 Submission Queue Entry Size 00:12:17.256 Max: 64 00:12:17.256 Min: 64 00:12:17.256 Completion Queue Entry Size 00:12:17.256 Max: 16 00:12:17.256 Min: 16 00:12:17.256 Number of Namespaces: 256 00:12:17.256 Compare Command: Supported 00:12:17.256 Write Uncorrectable Command: Not Supported 00:12:17.256 Dataset Management Command: Supported 00:12:17.256 Write Zeroes Command: Supported 00:12:17.256 Set Features Save Field: Supported 00:12:17.256 Reservations: Not Supported 00:12:17.256 Timestamp: Supported 00:12:17.256 Copy: Supported 00:12:17.256 Volatile Write Cache: Present 00:12:17.256 Atomic Write Unit (Normal): 1 00:12:17.256 Atomic Write Unit (PFail): 1 00:12:17.256 Atomic Compare & Write Unit: 1 00:12:17.256 Fused Compare & Write: Not Supported 00:12:17.256 Scatter-Gather List 00:12:17.256 SGL Command Set: Supported 00:12:17.256 SGL Keyed: Not Supported 00:12:17.256 SGL Bit Bucket Descriptor: Not Supported 00:12:17.256 SGL Metadata Pointer: Not Supported 00:12:17.256 Oversized SGL: Not Supported 00:12:17.256 SGL Metadata Address: Not Supported 00:12:17.256 SGL Offset: Not Supported 00:12:17.256 Transport SGL Data Block: Not Supported 00:12:17.256 Replay Protected Memory Block: Not Supported 00:12:17.256 00:12:17.256 Firmware Slot Information 00:12:17.256 ========================= 00:12:17.256 Active slot: 1 00:12:17.256 Slot 1 Firmware Revision: 1.0 00:12:17.256 00:12:17.256 00:12:17.256 Commands Supported and Effects 00:12:17.256 ============================== 00:12:17.256 Admin Commands 00:12:17.256 -------------- 00:12:17.256 Delete I/O Submission Queue (00h): Supported 
00:12:17.256 Create I/O Submission Queue (01h): Supported 00:12:17.256 Get Log Page (02h): Supported 00:12:17.256 Delete I/O Completion Queue (04h): Supported 00:12:17.256 Create I/O Completion Queue (05h): Supported 00:12:17.256 Identify (06h): Supported 00:12:17.256 Abort (08h): Supported 00:12:17.256 Set Features (09h): Supported 00:12:17.256 Get Features (0Ah): Supported 00:12:17.256 Asynchronous Event Request (0Ch): Supported 00:12:17.256 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.256 Directive Send (19h): Supported 00:12:17.256 Directive Receive (1Ah): Supported 00:12:17.256 Virtualization Management (1Ch): Supported 00:12:17.256 Doorbell Buffer Config (7Ch): Supported 00:12:17.256 Format NVM (80h): Supported LBA-Change 00:12:17.256 I/O Commands 00:12:17.256 ------------ 00:12:17.256 Flush (00h): Supported LBA-Change 00:12:17.256 Write (01h): Supported LBA-Change 00:12:17.256 Read (02h): Supported 00:12:17.256 Compare (05h): Supported 00:12:17.256 Write Zeroes (08h): Supported LBA-Change 00:12:17.256 Dataset Management (09h): Supported LBA-Change 00:12:17.256 Unknown (0Ch): Supported 00:12:17.256 Unknown (12h): Supported 00:12:17.256 Copy (19h): Supported LBA-Change 00:12:17.256 Unknown (1Dh): Supported LBA-Change 00:12:17.256 00:12:17.256 Error Log 00:12:17.256 ========= 00:12:17.256 00:12:17.256 Arbitration 00:12:17.256 =========== 00:12:17.256 Arbitration Burst: no limit 00:12:17.256 00:12:17.256 Power Management 00:12:17.256 ================ 00:12:17.256 Number of Power States: 1 00:12:17.256 Current Power State: Power State #0 00:12:17.256 Power State #0: 00:12:17.256 Max Power: 25.00 W 00:12:17.256 Non-Operational State: Operational 00:12:17.256 Entry Latency: 16 microseconds 00:12:17.256 Exit Latency: 4 microseconds 00:12:17.256 Relative Read Throughput: 0 00:12:17.256 Relative Read Latency: 0 00:12:17.256 Relative Write Throughput: 0 00:12:17.256 Relative Write Latency: 0 00:12:17.256 Idle Power[2024-11-25 10:17:24.252194] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64083 terminated unexpected 00:12:17.256 : Not Reported 00:12:17.256 Active Power: Not Reported 00:12:17.256 Non-Operational Permissive Mode: Not Supported 00:12:17.256 00:12:17.256 Health Information 00:12:17.256 ================== 00:12:17.256 Critical Warnings: 00:12:17.256 Available Spare Space: OK 00:12:17.256 Temperature: OK 00:12:17.256 Device Reliability: OK 00:12:17.256 Read Only: No 00:12:17.256 Volatile Memory Backup: OK 00:12:17.256 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.256 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.256 Available Spare: 0% 00:12:17.256 Available Spare Threshold: 0% 00:12:17.256 Life Percentage Used: 0% 00:12:17.256 Data Units Read: 774 00:12:17.256 Data Units Written: 702 00:12:17.256 Host Read Commands: 36662 00:12:17.256 Host Write Commands: 36448 00:12:17.256 Controller Busy Time: 0 minutes 00:12:17.256 Power Cycles: 0 00:12:17.256 Power On Hours: 0 hours 00:12:17.256 Unsafe Shutdowns: 0 00:12:17.256 Unrecoverable Media Errors: 0 00:12:17.256 Lifetime Error Log Entries: 0 00:12:17.256 Warning Temperature Time: 0 minutes 00:12:17.256 Critical Temperature Time: 0 minutes 00:12:17.256 00:12:17.256 Number of Queues 00:12:17.256 ================ 00:12:17.256 Number of I/O Submission Queues: 64 00:12:17.256 Number of I/O Completion Queues: 64 00:12:17.256 00:12:17.256 ZNS Specific Controller Data 00:12:17.256 ============================ 00:12:17.256 Zone Append Size Limit: 0 00:12:17.256 
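
This dump (and the three controller dumps that follow) comes from the single spdk_nvme_identify invocation above. A hedged sketch of re-running it and summarising a few fields per controller, assuming the same build tree, root privileges, and the devices still bound by setup.sh:

    # Re-run the identify pass and summarise what it reports (sketch).
    rootdir=/home/vagrant/spdk_repo/spdk
    out=$("$rootdir/build/bin/spdk_nvme_identify" -i 0)
    grep -c 'NVMe Controller at' <<<"$out"                 # expect 4 on this rig
    grep -E 'Serial Number|Current Temperature' <<<"$out"
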
00:12:17.256 00:12:17.256 Active Namespaces 00:12:17.256 ================= 00:12:17.256 Namespace ID:1 00:12:17.256 Error Recovery Timeout: Unlimited 00:12:17.256 Command Set Identifier: NVM (00h) 00:12:17.257 Deallocate: Supported 00:12:17.257 Deallocated/Unwritten Error: Supported 00:12:17.257 Deallocated Read Value: All 0x00 00:12:17.257 Deallocate in Write Zeroes: Not Supported 00:12:17.257 Deallocated Guard Field: 0xFFFF 00:12:17.257 Flush: Supported 00:12:17.257 Reservation: Not Supported 00:12:17.257 Metadata Transferred as: Separate Metadata Buffer 00:12:17.257 Namespace Sharing Capabilities: Private 00:12:17.257 Size (in LBAs): 1548666 (5GiB) 00:12:17.257 Capacity (in LBAs): 1548666 (5GiB) 00:12:17.257 Utilization (in LBAs): 1548666 (5GiB) 00:12:17.257 Thin Provisioning: Not Supported 00:12:17.257 Per-NS Atomic Units: No 00:12:17.257 Maximum Single Source Range Length: 128 00:12:17.257 Maximum Copy Length: 128 00:12:17.257 Maximum Source Range Count: 128 00:12:17.257 NGUID/EUI64 Never Reused: No 00:12:17.257 Namespace Write Protected: No 00:12:17.257 Number of LBA Formats: 8 00:12:17.257 Current LBA Format: LBA Format #07 00:12:17.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.257 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.257 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.257 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.257 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.257 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.257 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.257 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.257 00:12:17.257 NVM Specific Namespace Data 00:12:17.257 =========================== 00:12:17.257 Logical Block Storage Tag Mask: 0 00:12:17.257 Protection Information Capabilities: 00:12:17.257 16b Guard Protection Information Storage Tag Support: No 00:12:17.257 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.257 Storage Tag Check Read Support: No 00:12:17.257 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.257 ===================================================== 00:12:17.257 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:17.257 ===================================================== 00:12:17.257 Controller Capabilities/Features 00:12:17.257 ================================ 00:12:17.257 Vendor ID: 1b36 00:12:17.257 Subsystem Vendor ID: 1af4 00:12:17.257 Serial Number: 12341 00:12:17.257 Model Number: QEMU NVMe Ctrl 00:12:17.257 Firmware Version: 8.0.0 00:12:17.257 Recommended Arb Burst: 6 00:12:17.257 IEEE OUI Identifier: 00 54 52 00:12:17.257 Multi-path I/O 00:12:17.257 May have multiple subsystem ports: No 00:12:17.257 May have multiple controllers: No 
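
For comparison with the bdev_gpt_uuid assertions at the top of this section: the test fetches a single bdev over RPC by its UUID (rpc_cmd wraps scripts/rpc.py) and compares the jq-extracted GUID fields against the expected value. Condensed into one self-contained check, with the bdev name and UUID taken from the log above:

    # Assert a GPT partition bdev reports the expected unique partition GUID.
    rootdir=/home/vagrant/spdk_repo/spdk
    expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
    bdev_json=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$expected")
    actual=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json")
    [[ "$actual" == "$expected" ]] && echo "GPT UUID OK on Nvme1n1p2: $actual"
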
00:12:17.257 Associated with SR-IOV VF: No 00:12:17.257 Max Data Transfer Size: 524288 00:12:17.257 Max Number of Namespaces: 256 00:12:17.257 Max Number of I/O Queues: 64 00:12:17.257 NVMe Specification Version (VS): 1.4 00:12:17.257 NVMe Specification Version (Identify): 1.4 00:12:17.257 Maximum Queue Entries: 2048 00:12:17.257 Contiguous Queues Required: Yes 00:12:17.257 Arbitration Mechanisms Supported 00:12:17.257 Weighted Round Robin: Not Supported 00:12:17.257 Vendor Specific: Not Supported 00:12:17.257 Reset Timeout: 7500 ms 00:12:17.257 Doorbell Stride: 4 bytes 00:12:17.257 NVM Subsystem Reset: Not Supported 00:12:17.257 Command Sets Supported 00:12:17.257 NVM Command Set: Supported 00:12:17.257 Boot Partition: Not Supported 00:12:17.257 Memory Page Size Minimum: 4096 bytes 00:12:17.257 Memory Page Size Maximum: 65536 bytes 00:12:17.257 Persistent Memory Region: Not Supported 00:12:17.257 Optional Asynchronous Events Supported 00:12:17.257 Namespace Attribute Notices: Supported 00:12:17.257 Firmware Activation Notices: Not Supported 00:12:17.257 ANA Change Notices: Not Supported 00:12:17.257 PLE Aggregate Log Change Notices: Not Supported 00:12:17.257 LBA Status Info Alert Notices: Not Supported 00:12:17.257 EGE Aggregate Log Change Notices: Not Supported 00:12:17.257 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.257 Zone Descriptor Change Notices: Not Supported 00:12:17.257 Discovery Log Change Notices: Not Supported 00:12:17.257 Controller Attributes 00:12:17.257 128-bit Host Identifier: Not Supported 00:12:17.257 Non-Operational Permissive Mode: Not Supported 00:12:17.257 NVM Sets: Not Supported 00:12:17.257 Read Recovery Levels: Not Supported 00:12:17.257 Endurance Groups: Not Supported 00:12:17.257 Predictable Latency Mode: Not Supported 00:12:17.257 Traffic Based Keep ALive: Not Supported 00:12:17.257 Namespace Granularity: Not Supported 00:12:17.257 SQ Associations: Not Supported 00:12:17.257 UUID List: Not Supported 00:12:17.257 Multi-Domain Subsystem: Not Supported 00:12:17.257 Fixed Capacity Management: Not Supported 00:12:17.257 Variable Capacity Management: Not Supported 00:12:17.257 Delete Endurance Group: Not Supported 00:12:17.257 Delete NVM Set: Not Supported 00:12:17.257 Extended LBA Formats Supported: Supported 00:12:17.257 Flexible Data Placement Supported: Not Supported 00:12:17.257 00:12:17.257 Controller Memory Buffer Support 00:12:17.257 ================================ 00:12:17.257 Supported: No 00:12:17.257 00:12:17.257 Persistent Memory Region Support 00:12:17.257 ================================ 00:12:17.257 Supported: No 00:12:17.257 00:12:17.257 Admin Command Set Attributes 00:12:17.257 ============================ 00:12:17.257 Security Send/Receive: Not Supported 00:12:17.257 Format NVM: Supported 00:12:17.257 Firmware Activate/Download: Not Supported 00:12:17.257 Namespace Management: Supported 00:12:17.257 Device Self-Test: Not Supported 00:12:17.257 Directives: Supported 00:12:17.257 NVMe-MI: Not Supported 00:12:17.257 Virtualization Management: Not Supported 00:12:17.257 Doorbell Buffer Config: Supported 00:12:17.257 Get LBA Status Capability: Not Supported 00:12:17.257 Command & Feature Lockdown Capability: Not Supported 00:12:17.257 Abort Command Limit: 4 00:12:17.257 Async Event Request Limit: 4 00:12:17.257 Number of Firmware Slots: N/A 00:12:17.257 Firmware Slot 1 Read-Only: N/A 00:12:17.257 Firmware Activation Without Reset: N/A 00:12:17.257 Multiple Update Detection Support: N/A 00:12:17.257 Firmware Update Granularity: No 
Information Provided 00:12:17.257 Per-Namespace SMART Log: Yes 00:12:17.257 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.257 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:17.257 Command Effects Log Page: Supported 00:12:17.257 Get Log Page Extended Data: Supported 00:12:17.257 Telemetry Log Pages: Not Supported 00:12:17.257 Persistent Event Log Pages: Not Supported 00:12:17.257 Supported Log Pages Log Page: May Support 00:12:17.257 Commands Supported & Effects Log Page: Not Supported 00:12:17.257 Feature Identifiers & Effects Log Page:May Support 00:12:17.257 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.257 Data Area 4 for Telemetry Log: Not Supported 00:12:17.257 Error Log Page Entries Supported: 1 00:12:17.257 Keep Alive: Not Supported 00:12:17.257 00:12:17.257 NVM Command Set Attributes 00:12:17.257 ========================== 00:12:17.257 Submission Queue Entry Size 00:12:17.257 Max: 64 00:12:17.257 Min: 64 00:12:17.257 Completion Queue Entry Size 00:12:17.257 Max: 16 00:12:17.257 Min: 16 00:12:17.257 Number of Namespaces: 256 00:12:17.257 Compare Command: Supported 00:12:17.257 Write Uncorrectable Command: Not Supported 00:12:17.257 Dataset Management Command: Supported 00:12:17.257 Write Zeroes Command: Supported 00:12:17.257 Set Features Save Field: Supported 00:12:17.257 Reservations: Not Supported 00:12:17.257 Timestamp: Supported 00:12:17.257 Copy: Supported 00:12:17.257 Volatile Write Cache: Present 00:12:17.257 Atomic Write Unit (Normal): 1 00:12:17.257 Atomic Write Unit (PFail): 1 00:12:17.257 Atomic Compare & Write Unit: 1 00:12:17.257 Fused Compare & Write: Not Supported 00:12:17.257 Scatter-Gather List 00:12:17.257 SGL Command Set: Supported 00:12:17.257 SGL Keyed: Not Supported 00:12:17.257 SGL Bit Bucket Descriptor: Not Supported 00:12:17.257 SGL Metadata Pointer: Not Supported 00:12:17.257 Oversized SGL: Not Supported 00:12:17.257 SGL Metadata Address: Not Supported 00:12:17.257 SGL Offset: Not Supported 00:12:17.257 Transport SGL Data Block: Not Supported 00:12:17.257 Replay Protected Memory Block: Not Supported 00:12:17.257 00:12:17.257 Firmware Slot Information 00:12:17.257 ========================= 00:12:17.257 Active slot: 1 00:12:17.257 Slot 1 Firmware Revision: 1.0 00:12:17.257 00:12:17.257 00:12:17.257 Commands Supported and Effects 00:12:17.258 ============================== 00:12:17.258 Admin Commands 00:12:17.258 -------------- 00:12:17.258 Delete I/O Submission Queue (00h): Supported 00:12:17.258 Create I/O Submission Queue (01h): Supported 00:12:17.258 Get Log Page (02h): Supported 00:12:17.258 Delete I/O Completion Queue (04h): Supported 00:12:17.258 Create I/O Completion Queue (05h): Supported 00:12:17.258 Identify (06h): Supported 00:12:17.258 Abort (08h): Supported 00:12:17.258 Set Features (09h): Supported 00:12:17.258 Get Features (0Ah): Supported 00:12:17.258 Asynchronous Event Request (0Ch): Supported 00:12:17.258 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.258 Directive Send (19h): Supported 00:12:17.258 Directive Receive (1Ah): Supported 00:12:17.258 Virtualization Management (1Ch): Supported 00:12:17.258 Doorbell Buffer Config (7Ch): Supported 00:12:17.258 Format NVM (80h): Supported LBA-Change 00:12:17.258 I/O Commands 00:12:17.258 ------------ 00:12:17.258 Flush (00h): Supported LBA-Change 00:12:17.258 Write (01h): Supported LBA-Change 00:12:17.258 Read (02h): Supported 00:12:17.258 Compare (05h): Supported 00:12:17.258 Write Zeroes (08h): Supported LBA-Change 00:12:17.258 Dataset Management 
(09h): Supported LBA-Change 00:12:17.258 Unknown (0Ch): Supported 00:12:17.258 Unknown (12h): Supported 00:12:17.258 Copy (19h): Supported LBA-Change 00:12:17.258 Unknown (1Dh): Supported LBA-Change 00:12:17.258 00:12:17.258 Error Log 00:12:17.258 ========= 00:12:17.258 00:12:17.258 Arbitration 00:12:17.258 =========== 00:12:17.258 Arbitration Burst: no limit 00:12:17.258 00:12:17.258 Power Management 00:12:17.258 ================ 00:12:17.258 Number of Power States: 1 00:12:17.258 Current Power State: Power State #0 00:12:17.258 Power State #0: 00:12:17.258 Max Power: 25.00 W 00:12:17.258 Non-Operational State: Operational 00:12:17.258 Entry Latency: 16 microseconds 00:12:17.258 Exit Latency: 4 microseconds 00:12:17.258 Relative Read Throughput: 0 00:12:17.258 Relative Read Latency: 0 00:12:17.258 Relative Write Throughput: 0 00:12:17.258 Relative Write Latency: 0 00:12:17.258 Idle Power: Not Reported 00:12:17.258 Active Power: Not Reported 00:12:17.258 Non-Operational Permissive Mode: Not Supported 00:12:17.258 00:12:17.258 Health Information 00:12:17.258 ================== 00:12:17.258 Critical Warnings: 00:12:17.258 Available Spare Space: OK 00:12:17.258 Temperature: OK 00:12:17.258 Device Reliability: OK 00:12:17.258 Read Only: No 00:12:17.258 Volatile Memory Backup: OK 00:12:17.258 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.258 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.258 Available Spare: 0% 00:12:17.258 Available Spare Threshold: 0% 00:12:17.258 Life Percentage Used: 0% 00:12:17.258 Data Units Read: 1197 00:12:17.258 Data Units Written: 1064 00:12:17.258 Host Read Commands: 54776 00:12:17.258 Host Write Commands: 53561 00:12:17.258 Controller Busy Time: 0 minutes 00:12:17.258 Power Cycles: 0 00:12:17.258 Power On Hours: 0 hours 00:12:17.258 Unsafe Shutdowns: 0 00:12:17.258 Unrecoverable Media Errors: 0 00:12:17.258 Lifetime Error Log Entries: 0 00:12:17.258 Warning Temperature Time: 0 minutes 00:12:17.258 Critical Temperature Time: 0 minutes 00:12:17.258 00:12:17.258 Number of Queues 00:12:17.258 ================ 00:12:17.258 Number of I/O Submission Queues: 64 00:12:17.258 Number of I/O Completion Queues: 64 00:12:17.258 00:12:17.258 ZNS Specific Controller Data 00:12:17.258 ============================ 00:12:17.258 Zone Append Size Limit: 0 00:12:17.258 00:12:17.258 00:12:17.258 Active Namespaces 00:12:17.258 ================= 00:12:17.258 Namespace ID:1 00:12:17.258 Error Recovery Timeout: Unlimited 00:12:17.258 Command Set Identifier: NVM (00h) 00:12:17.258 Deallocate: Supported 00:12:17.258 Deallocated/Unwritten Error: Supported 00:12:17.258 Deallocated Read Value: All 0x00 00:12:17.258 Deallocate in Write Zeroes: Not Supported 00:12:17.258 Deallocated Guard Field: 0xFFFF 00:12:17.258 Flush: Supported 00:12:17.258 Reservation: Not Supported 00:12:17.258 Namespace Sharing Capabilities: Private 00:12:17.258 Size (in LBAs): 1310720 (5GiB) 00:12:17.258 Capacity (in LBAs): 1310720 (5GiB) 00:12:17.258 Utilization (in LBAs): 1310720 (5GiB) 00:12:17.258 Thin Provisioning: Not Supported 00:12:17.258 Per-NS Atomic Units: No 00:12:17.258 Maximum Single Source Range Length: 128 00:12:17.258 Maximum Copy Length: 128 00:12:17.258 Maximum Source Range Count: 128 00:12:17.258 NGUID/EUI64 Never Reused: No 00:12:17.258 Namespace Write Protected: No 00:12:17.258 Number of LBA Formats: 8 00:12:17.258 Current LBA Format: LBA Format #04 00:12:17.258 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.258 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.258 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:12:17.258 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.258 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.258 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.258 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.258 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.258 00:12:17.258 NVM Specific Namespace Data 00:12:17.258 =========================== 00:12:17.258 Logical Block Storage Tag Mask: 0 00:12:17.258 Protection Information Capabilities: 00:12:17.258 16b Guard Protection Information Storage Tag Support: No 00:12:17.258 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.258 Storage Tag Check Read Support: No 00:12:17.258 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.258 ===================================================== 00:12:17.258 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:17.258 ===================================================== 00:12:17.258 Controller Capabilities/Features 00:12:17.258 ================================ 00:12:17.258 Vendor ID: 1b36 00:12:17.258 Subsystem Vendor ID: 1af4 00:12:17.258 Serial Number: 12343 00:12:17.258 Model Number: QEMU NVMe Ctrl 00:12:17.258 Firmware Version: 8.0.0 00:12:17.258 Recommended Arb Burst: 6 00:12:17.258 IEEE OUI Identifier: 00 54 52 00:12:17.258 Multi-path I/O 00:12:17.258 May have multiple subsystem ports: No 00:12:17.258 May have multiple controllers: Yes 00:12:17.258 Associated with SR-IOV VF: No 00:12:17.258 Max Data Transfer Size: 524288 00:12:17.258 Max Number of Namespaces: 256 00:12:17.258 Max Number of I/O Queues: 64 00:12:17.258 NVMe Specification Version (VS): 1.4 00:12:17.258 NVMe Specification Version (Identify): 1.4 00:12:17.258 Maximum Queue Entries: 2048 00:12:17.258 Contiguous Queues Required: Yes 00:12:17.258 Arbitration Mechanisms Supported 00:12:17.258 Weighted Round Robin: Not Supported 00:12:17.258 Vendor Specific: Not Supported 00:12:17.258 Reset Timeout: 7500 ms 00:12:17.258 Doorbell Stride: 4 bytes 00:12:17.258 NVM Subsystem Reset: Not Supported 00:12:17.258 Command Sets Supported 00:12:17.258 NVM Command Set: Supported 00:12:17.258 Boot Partition: Not Supported 00:12:17.258 Memory Page Size Minimum: 4096 bytes 00:12:17.258 Memory Page Size Maximum: 65536 bytes 00:12:17.258 Persistent Memory Region: Not Supported 00:12:17.258 Optional Asynchronous Events Supported 00:12:17.258 Namespace Attribute Notices: Supported 00:12:17.258 Firmware Activation Notices: Not Supported 00:12:17.258 ANA Change Notices: Not Supported 00:12:17.258 PLE Aggregate Log Change Notices: Not Supported 00:12:17.258 LBA Status Info Alert Notices: Not Supported 00:12:17.258 EGE Aggregate Log Change Notices: Not Supported 
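
Back in the blockdev_nvme_gpt teardown earlier, cleanup first checked that the node existed and then let wipefs erase the GPT headers and protective MBR, which is what the three "bytes were erased" lines reported. That step in isolation, with the device node as on this rig (needs root):

    # Teardown: wipe GPT and PMBR signatures so the next test starts clean.
    if [[ -b /dev/nvme0n1 ]]; then
        wipefs --all /dev/nvme0n1   # erases the "EFI PART" magic of both GPT copies and the 55 aa PMBR bytes
    fi
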
00:12:17.258 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.258 Zone Descriptor Change Notices: Not Supported 00:12:17.258 Discovery Log Change Notices: Not Supported 00:12:17.258 Controller Attributes 00:12:17.258 128-bit Host Identifier: Not Supported 00:12:17.258 Non-Operational Permissive Mode: Not Supported 00:12:17.258 NVM Sets: Not Supported 00:12:17.258 Read Recovery Levels: Not Supported 00:12:17.258 Endurance Groups: Supported 00:12:17.258 Predictable Latency Mode: Not Supported 00:12:17.258 Traffic Based Keep ALive: Not Supported 00:12:17.258 Namespace Granularity: Not Supported 00:12:17.258 SQ Associations: Not Supported 00:12:17.258 UUID List: Not Supported 00:12:17.258 Multi-Domain Subsystem: Not Supported 00:12:17.259 Fixed Capacity Management: Not Supported 00:12:17.259 Variable Capacity Management: Not Supported 00:12:17.259 Delete Endurance Group: Not Supported 00:12:17.259 Delete NVM Set: Not Supported 00:12:17.259 Extended LBA Formats Supported: Supported 00:12:17.259 Flexible Data Placement Supported: Supported 00:12:17.259 00:12:17.259 Controller Memory Buffer Support 00:12:17.259 ================================ 00:12:17.259 Supported: No 00:12:17.259 00:12:17.259 Persistent Memory Region Support 00:12:17.259 ================================ 00:12:17.259 Supported: No 00:12:17.259 00:12:17.259 Admin Command Set Attributes 00:12:17.259 ============================ 00:12:17.259 Security Send/Receive: Not Supported 00:12:17.259 Format NVM: Supported 00:12:17.259 Firmware Activate/Download: Not Supported 00:12:17.259 Namespace Management: Supported 00:12:17.259 Device Self-Test: Not Supported 00:12:17.259 Directives: Supported 00:12:17.259 NVMe-MI: Not Supported 00:12:17.259 Virtualization Management: Not Supported 00:12:17.259 Doorbell Buffer Config: Supported 00:12:17.259 Get LBA Status Capability: Not Supported 00:12:17.259 Command & Feature Lockdown Capability: Not Supported 00:12:17.259 Abort Command Limit: 4 00:12:17.259 Async Event Request Limit: 4 00:12:17.259 Number of Firmware Slots: N/A 00:12:17.259 Firmware Slot 1 Read-Only: N/A 00:12:17.259 Firmware Activation Without Reset: N/A 00:12:17.259 Multiple Update Detection Support: N/A 00:12:17.259 Firmware Update Granularity: No Information Provided 00:12:17.259 Per-Namespace SMART Log: Yes 00:12:17.259 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.259 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:17.259 Command Effects Log Page: Supported 00:12:17.259 Get Log Page Extended Data: Supported 00:12:17.259 Telemetry Log Pages: Not Supported 00:12:17.259 Persistent Event Log Pages: Not Supported 00:12:17.259 Supported Log Pages Log Page: May Support 00:12:17.259 Commands Supported & Effects Log Page: Not Supported 00:12:17.259 Feature Identifiers & Effects Log Page:May Support 00:12:17.259 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.259 Data Area 4 for Telemetry Log: Not Supported 00:12:17.259 Error Log Page Entries Supported: 1 00:12:17.259 Keep Alive: Not Supported 00:12:17.259 00:12:17.259 NVM Command Set Attributes 00:12:17.259 ========================== 00:12:17.259 Submission Queue Entry Size 00:12:17.259 Max: 64 00:12:17.259 Min: 64 00:12:17.259 Completion Queue Entry Size 00:12:17.259 Max: 16 00:12:17.259 Min: 16 00:12:17.259 Number of Namespaces: 256 00:12:17.259 Compare Command: Supported 00:12:17.259 Write Uncorrectable Command: Not Supported 00:12:17.259 Dataset Management Command: Supported 00:12:17.259 Write Zeroes Command: Supported 00:12:17.259 Set 
Features Save Field: Supported 00:12:17.259 Reservations: Not Supported 00:12:17.259 Timestamp: Supported 00:12:17.259 Copy: Supported 00:12:17.259 Volatile Write Cache: Present 00:12:17.259 Atomic Write Unit (Normal): 1 00:12:17.259 Atomic Write Unit (PFail): 1 00:12:17.259 Atomic Compare & Write Unit: 1 00:12:17.259 Fused Compare & Write: Not Supported 00:12:17.259 Scatter-Gather List 00:12:17.259 SGL Command Set: Supported 00:12:17.259 SGL Keyed: Not Supported 00:12:17.259 SGL Bit Bucket Descriptor: Not Supported 00:12:17.259 SGL Metadata Pointer: Not Supported 00:12:17.259 Oversized SGL: Not Supported 00:12:17.259 SGL Metadata Address: Not Supported 00:12:17.259 SGL Offset: Not Supported 00:12:17.259 Transport SGL Data Block: Not Supported 00:12:17.259 Replay Protected Memory Block: Not Supported 00:12:17.259 00:12:17.259 Firmware Slot Information 00:12:17.259 ========================= 00:12:17.259 Active slot: 1 00:12:17.259 Slot 1 Firmware Revision: 1.0 00:12:17.259 00:12:17.259 00:12:17.259 Commands Supported and Effects 00:12:17.259 ============================== 00:12:17.259 Admin Commands 00:12:17.259 -------------- 00:12:17.259 Delete I/O Submission Queue (00h): Supported 00:12:17.259 Create I/O Submission Queue (01h): Supported 00:12:17.259 Get Log Page (02h): Supported 00:12:17.259 Delete I/O Completion Queue (04h): Supported 00:12:17.259 Create I/O Completion Queue (05h): Supported 00:12:17.259 Identify (06h): Supported 00:12:17.259 Abort (08h): Supported 00:12:17.259 Set Features (09h): Supported 00:12:17.259 Get Features (0Ah): Supported 00:12:17.259 Asynchronous Event Request (0Ch): Supported 00:12:17.259 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.259 Directive Send (19h): Supported 00:12:17.259 Directive Receive (1Ah): Supported 00:12:17.259 Virtualization Management (1Ch): Supported 00:12:17.259 Doorbell Buffer Config (7Ch): Supported 00:12:17.259 Format NVM (80h): Supported LBA-Change 00:12:17.259 I/O Commands 00:12:17.259 ------------ 00:12:17.259 Flush (00h): Supported LBA-Change 00:12:17.259 Write (01h): Supported LBA-Change 00:12:17.259 Read (02h): Supported 00:12:17.259 Compare (05h): Supported 00:12:17.259 Write Zeroes (08h): Supported LBA-Change 00:12:17.259 Dataset Management (09h): Supported LBA-Change 00:12:17.259 Unknown (0Ch): Supported 00:12:17.259 Unknown (12h): Supported 00:12:17.259 Copy (19h): Supported LBA-Change 00:12:17.259 Unknown (1Dh): Supported LBA-Change 00:12:17.259 00:12:17.259 Error Log 00:12:17.259 ========= 00:12:17.259 00:12:17.259 Arbitration 00:12:17.259 =========== 00:12:17.259 Arbitration Burst: no limit 00:12:17.259 00:12:17.259 Power Management 00:12:17.259 ================ 00:12:17.259 Number of Power States: 1 00:12:17.259 Current Power State: Power State #0 00:12:17.259 Power State #0: 00:12:17.259 Max Power: 25.00 W 00:12:17.259 Non-Operational State: Operational 00:12:17.259 Entry Latency: 16 microseconds 00:12:17.259 Exit Latency: 4 microseconds 00:12:17.259 Relative Read Throughput: 0 00:12:17.259 Relative Read Latency: 0 00:12:17.259 Relative Write Throughput: 0 00:12:17.259 Relative Write Latency: 0 00:12:17.259 Idle Power: Not Reported 00:12:17.259 Active Power: Not Reported 00:12:17.259 Non-Operational Permissive Mode: Not Supported 00:12:17.259 00:12:17.259 Health Information 00:12:17.259 ================== 00:12:17.259 Critical Warnings: 00:12:17.259 Available Spare Space: OK 00:12:17.259 Temperature: OK 00:12:17.259 Device Reliability: OK 00:12:17.259 Read Only: No 00:12:17.259 Volatile Memory 
Backup: OK 00:12:17.259 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.259 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.259 Available Spare: 0% 00:12:17.259 Available Spare Threshold: 0% 00:12:17.259 Life Percentage Used: [2024-11-25 10:17:24.253143] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64083 terminated unexpected 00:12:17.259 0% 00:12:17.259 Data Units Read: 883 00:12:17.259 Data Units Written: 812 00:12:17.259 Host Read Commands: 37939 00:12:17.259 Host Write Commands: 37362 00:12:17.259 Controller Busy Time: 0 minutes 00:12:17.259 Power Cycles: 0 00:12:17.259 Power On Hours: 0 hours 00:12:17.259 Unsafe Shutdowns: 0 00:12:17.259 Unrecoverable Media Errors: 0 00:12:17.259 Lifetime Error Log Entries: 0 00:12:17.259 Warning Temperature Time: 0 minutes 00:12:17.259 Critical Temperature Time: 0 minutes 00:12:17.259 00:12:17.259 Number of Queues 00:12:17.259 ================ 00:12:17.259 Number of I/O Submission Queues: 64 00:12:17.259 Number of I/O Completion Queues: 64 00:12:17.259 00:12:17.259 ZNS Specific Controller Data 00:12:17.259 ============================ 00:12:17.259 Zone Append Size Limit: 0 00:12:17.259 00:12:17.259 00:12:17.259 Active Namespaces 00:12:17.259 ================= 00:12:17.259 Namespace ID:1 00:12:17.259 Error Recovery Timeout: Unlimited 00:12:17.259 Command Set Identifier: NVM (00h) 00:12:17.259 Deallocate: Supported 00:12:17.259 Deallocated/Unwritten Error: Supported 00:12:17.259 Deallocated Read Value: All 0x00 00:12:17.259 Deallocate in Write Zeroes: Not Supported 00:12:17.259 Deallocated Guard Field: 0xFFFF 00:12:17.259 Flush: Supported 00:12:17.259 Reservation: Not Supported 00:12:17.259 Namespace Sharing Capabilities: Multiple Controllers 00:12:17.259 Size (in LBAs): 262144 (1GiB) 00:12:17.259 Capacity (in LBAs): 262144 (1GiB) 00:12:17.259 Utilization (in LBAs): 262144 (1GiB) 00:12:17.259 Thin Provisioning: Not Supported 00:12:17.259 Per-NS Atomic Units: No 00:12:17.259 Maximum Single Source Range Length: 128 00:12:17.259 Maximum Copy Length: 128 00:12:17.259 Maximum Source Range Count: 128 00:12:17.259 NGUID/EUI64 Never Reused: No 00:12:17.259 Namespace Write Protected: No 00:12:17.259 Endurance group ID: 1 00:12:17.259 Number of LBA Formats: 8 00:12:17.259 Current LBA Format: LBA Format #04 00:12:17.260 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.260 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.260 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.260 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.260 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.260 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.260 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.260 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.260 00:12:17.260 Get Feature FDP: 00:12:17.260 ================ 00:12:17.260 Enabled: Yes 00:12:17.260 FDP configuration index: 0 00:12:17.260 00:12:17.260 FDP configurations log page 00:12:17.260 =========================== 00:12:17.260 Number of FDP configurations: 1 00:12:17.260 Version: 0 00:12:17.260 Size: 112 00:12:17.260 FDP Configuration Descriptor: 0 00:12:17.260 Descriptor Size: 96 00:12:17.260 Reclaim Group Identifier format: 2 00:12:17.260 FDP Volatile Write Cache: Not Present 00:12:17.260 FDP Configuration: Valid 00:12:17.260 Vendor Specific Size: 0 00:12:17.260 Number of Reclaim Groups: 2 00:12:17.260 Number of Reclaim Unit Handles: 8 00:12:17.260 Max Placement Identifiers: 128 00:12:17.260 Number
of Namespaces Supported: 256 00:12:17.260 Reclaim unit Nominal Size: 6000000 bytes 00:12:17.260 Estimated Reclaim Unit Time Limit: Not Reported 00:12:17.260 RUH Desc #000: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #001: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #002: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #003: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #004: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #005: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #006: RUH Type: Initially Isolated 00:12:17.260 RUH Desc #007: RUH Type: Initially Isolated 00:12:17.260 00:12:17.260 FDP reclaim unit handle usage log page 00:12:17.260 ====================================== 00:12:17.260 Number of Reclaim Unit Handles: 8 00:12:17.260 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:17.260 RUH Usage Desc #001: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #002: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #003: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #004: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #005: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #006: RUH Attributes: Unused 00:12:17.260 RUH Usage Desc #007: RUH Attributes: Unused 00:12:17.260 00:12:17.260 FDP statistics log page 00:12:17.260 ======================= 00:12:17.260 Host bytes with metadata written: 520855552 00:12:17.260 Media bytes with metadata written: 520912896 00:12:17.260 Media bytes erased: 0 00:12:17.260 00:12:17.260 FDP events log page 00:12:17.260 =================== 00:12:17.260 Number of FDP events: 0 00:12:17.260 00:12:17.260 NVM Specific Namespace Data 00:12:17.260 =========================== 00:12:17.260 Logical Block Storage Tag Mask: 0 00:12:17.260 Protection Information Capabilities: 00:12:17.260 16b Guard Protection Information Storage Tag Support: No 00:12:17.260 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.260 Storage Tag Check Read Support: No 00:12:17.260 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.260 ===================================================== 00:12:17.260 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:17.260 ===================================================== 00:12:17.260 Controller Capabilities/Features 00:12:17.260 ================================ 00:12:17.260 Vendor ID: 1b36 00:12:17.260 Subsystem Vendor ID: 1af4 00:12:17.260 Serial Number: 12342 00:12:17.260 Model Number: QEMU NVMe Ctrl 00:12:17.260 Firmware Version: 8.0.0 00:12:17.260 Recommended Arb Burst: 6 00:12:17.260 IEEE OUI Identifier: 00 54 52 00:12:17.260 Multi-path I/O 00:12:17.260 May have multiple subsystem ports: No 00:12:17.260 May have multiple controllers: No 00:12:17.260 Associated with SR-IOV VF: No 00:12:17.260 Max Data
Transfer Size: 524288 00:12:17.260 Max Number of Namespaces: 256 00:12:17.260 Max Number of I/O Queues: 64 00:12:17.260 NVMe Specification Version (VS): 1.4 00:12:17.260 NVMe Specification Version (Identify): 1.4 00:12:17.260 Maximum Queue Entries: 2048 00:12:17.260 Contiguous Queues Required: Yes 00:12:17.260 Arbitration Mechanisms Supported 00:12:17.260 Weighted Round Robin: Not Supported 00:12:17.260 Vendor Specific: Not Supported 00:12:17.260 Reset Timeout: 7500 ms 00:12:17.260 Doorbell Stride: 4 bytes 00:12:17.260 NVM Subsystem Reset: Not Supported 00:12:17.260 Command Sets Supported 00:12:17.260 NVM Command Set: Supported 00:12:17.260 Boot Partition: Not Supported 00:12:17.260 Memory Page Size Minimum: 4096 bytes 00:12:17.260 Memory Page Size Maximum: 65536 bytes 00:12:17.260 Persistent Memory Region: Not Supported 00:12:17.260 Optional Asynchronous Events Supported 00:12:17.260 Namespace Attribute Notices: Supported 00:12:17.260 Firmware Activation Notices: Not Supported 00:12:17.260 ANA Change Notices: Not Supported 00:12:17.260 PLE Aggregate Log Change Notices: Not Supported 00:12:17.260 LBA Status Info Alert Notices: Not Supported 00:12:17.260 EGE Aggregate Log Change Notices: Not Supported 00:12:17.260 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.260 Zone Descriptor Change Notices: Not Supported 00:12:17.260 Discovery Log Change Notices: Not Supported 00:12:17.260 Controller Attributes 00:12:17.260 128-bit Host Identifier: Not Supported 00:12:17.260 Non-Operational Permissive Mode: Not Supported 00:12:17.260 NVM Sets: Not Supported 00:12:17.260 Read Recovery Levels: Not Supported 00:12:17.260 Endurance Groups: Not Supported 00:12:17.260 Predictable Latency Mode: Not Supported 00:12:17.260 Traffic Based Keep ALive: Not Supported 00:12:17.260 Namespace Granularity: Not Supported 00:12:17.260 SQ Associations: Not Supported 00:12:17.260 UUID List: Not Supported 00:12:17.260 Multi-Domain Subsystem: Not Supported 00:12:17.260 Fixed Capacity Management: Not Supported 00:12:17.260 Variable Capacity Management: Not Supported 00:12:17.260 Delete Endurance Group: Not Supported 00:12:17.260 Delete NVM Set: Not Supported 00:12:17.260 Extended LBA Formats Supported: Supported 00:12:17.260 Flexible Data Placement Supported: Not Supported 00:12:17.260 00:12:17.260 Controller Memory Buffer Support 00:12:17.260 ================================ 00:12:17.260 Supported: No 00:12:17.260 00:12:17.260 Persistent Memory Region Support 00:12:17.260 ================================ 00:12:17.260 Supported: No 00:12:17.260 00:12:17.260 Admin Command Set Attributes 00:12:17.260 ============================ 00:12:17.260 Security Send/Receive: Not Supported 00:12:17.260 Format NVM: Supported 00:12:17.260 Firmware Activate/Download: Not Supported 00:12:17.260 Namespace Management: Supported 00:12:17.260 Device Self-Test: Not Supported 00:12:17.260 Directives: Supported 00:12:17.260 NVMe-MI: Not Supported 00:12:17.260 Virtualization Management: Not Supported 00:12:17.260 Doorbell Buffer Config: Supported 00:12:17.260 Get LBA Status Capability: Not Supported 00:12:17.260 Command & Feature Lockdown Capability: Not Supported 00:12:17.260 Abort Command Limit: 4 00:12:17.260 Async Event Request Limit: 4 00:12:17.260 Number of Firmware Slots: N/A 00:12:17.260 Firmware Slot 1 Read-Only: N/A 00:12:17.261 Firmware Activation Without Reset: N/A 00:12:17.261 Multiple Update Detection Support: N/A 00:12:17.261 Firmware Update Granularity: No Information Provided 00:12:17.261 Per-Namespace SMART Log: Yes 
00:12:17.261 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.261 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:17.261 Command Effects Log Page: Supported 00:12:17.261 Get Log Page Extended Data: Supported 00:12:17.261 Telemetry Log Pages: Not Supported 00:12:17.261 Persistent Event Log Pages: Not Supported 00:12:17.261 Supported Log Pages Log Page: May Support 00:12:17.261 Commands Supported & Effects Log Page: Not Supported 00:12:17.261 Feature Identifiers & Effects Log Page:May Support 00:12:17.261 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.261 Data Area 4 for Telemetry Log: Not Supported 00:12:17.261 Error Log Page Entries Supported: 1 00:12:17.261 Keep Alive: Not Supported 00:12:17.261 00:12:17.261 NVM Command Set Attributes 00:12:17.261 ========================== 00:12:17.261 Submission Queue Entry Size 00:12:17.261 Max: 64 00:12:17.261 Min: 64 00:12:17.261 Completion Queue Entry Size 00:12:17.261 Max: 16 00:12:17.261 Min: 16 00:12:17.261 Number of Namespaces: 256 00:12:17.261 Compare Command: Supported 00:12:17.261 Write Uncorrectable Command: Not Supported 00:12:17.261 Dataset Management Command: Supported 00:12:17.261 Write Zeroes Command: Supported 00:12:17.261 Set Features Save Field: Supported 00:12:17.261 Reservations: Not Supported 00:12:17.261 Timestamp: Supported 00:12:17.261 Copy: Supported 00:12:17.261 Volatile Write Cache: Present 00:12:17.261 Atomic Write Unit (Normal): 1 00:12:17.261 Atomic Write Unit (PFail): 1 00:12:17.261 Atomic Compare & Write Unit: 1 00:12:17.261 Fused Compare & Write: Not Supported 00:12:17.261 Scatter-Gather List 00:12:17.261 SGL Command Set: Supported 00:12:17.261 SGL Keyed: Not Supported 00:12:17.261 SGL Bit Bucket Descriptor: Not Supported 00:12:17.261 SGL Metadata Pointer: Not Supported 00:12:17.261 Oversized SGL: Not Supported 00:12:17.261 SGL Metadata Address: Not Supported 00:12:17.261 SGL Offset: Not Supported 00:12:17.261 Transport SGL Data Block: Not Supported 00:12:17.261 Replay Protected Memory Block: Not Supported 00:12:17.261 00:12:17.261 Firmware Slot Information 00:12:17.261 ========================= 00:12:17.261 Active slot: 1 00:12:17.261 Slot 1 Firmware Revision: 1.0 00:12:17.261 00:12:17.261 00:12:17.261 Commands Supported and Effects 00:12:17.261 ============================== 00:12:17.261 Admin Commands 00:12:17.261 -------------- 00:12:17.261 Delete I/O Submission Queue (00h): Supported 00:12:17.261 Create I/O Submission Queue (01h): Supported 00:12:17.261 Get Log Page (02h): Supported 00:12:17.261 Delete I/O Completion Queue (04h): Supported 00:12:17.261 Create I/O Completion Queue (05h): Supported 00:12:17.261 Identify (06h): Supported 00:12:17.261 Abort (08h): Supported 00:12:17.261 Set Features (09h): Supported 00:12:17.261 Get Features (0Ah): Supported 00:12:17.261 Asynchronous Event Request (0Ch): Supported 00:12:17.261 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.261 Directive Send (19h): Supported 00:12:17.261 Directive Receive (1Ah): Supported 00:12:17.261 Virtualization Management (1Ch): Supported 00:12:17.261 Doorbell Buffer Config (7Ch): Supported 00:12:17.261 Format NVM (80h): Supported LBA-Change 00:12:17.261 I/O Commands 00:12:17.261 ------------ 00:12:17.261 Flush (00h): Supported LBA-Change 00:12:17.261 Write (01h): Supported LBA-Change 00:12:17.261 Read (02h): Supported 00:12:17.261 Compare (05h): Supported 00:12:17.261 Write Zeroes (08h): Supported LBA-Change 00:12:17.261 Dataset Management (09h): Supported LBA-Change 00:12:17.261 Unknown (0Ch): 
Supported 00:12:17.261 Unknown (12h): Supported 00:12:17.261 Copy (19h): Supported LBA-Change 00:12:17.261 Unknown (1Dh): Supported LBA-Change 00:12:17.261 00:12:17.261 Error Log 00:12:17.261 ========= 00:12:17.261 00:12:17.261 Arbitration 00:12:17.261 =========== 00:12:17.261 Arbitration Burst: no limit 00:12:17.261 00:12:17.261 Power Management 00:12:17.261 ================ 00:12:17.261 Number of Power States: 1 00:12:17.261 Current Power State: Power State #0 00:12:17.261 Power State #0: 00:12:17.261 Max Power: 25.00 W 00:12:17.261 Non-Operational State: Operational 00:12:17.261 Entry Latency: 16 microseconds 00:12:17.261 Exit Latency: 4 microseconds 00:12:17.261 Relative Read Throughput: 0 00:12:17.261 Relative Read Latency: 0 00:12:17.261 Relative Write Throughput: 0 00:12:17.261 Relative Write Latency: 0 00:12:17.261 Idle Power: Not Reported 00:12:17.261 Active Power: Not Reported 00:12:17.261 Non-Operational Permissive Mode: Not Supported 00:12:17.261 00:12:17.261 Health Information 00:12:17.261 ================== 00:12:17.261 Critical Warnings: 00:12:17.261 Available Spare Space: OK 00:12:17.261 Temperature: OK 00:12:17.261 Device Reliability: OK 00:12:17.261 Read Only: No 00:12:17.261 Volatile Memory Backup: OK 00:12:17.261 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.261 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.261 Available Spare: 0% 00:12:17.261 Available Spare Threshold: 0% 00:12:17.261 Life Percentage Used: 0% 00:12:17.261 Data Units Read: 2457 00:12:17.261 Data Units Written: 2244 00:12:17.261 Host Read Commands: 112077 00:12:17.261 Host Write Commands: 110346 00:12:17.261 Controller Busy Time: 0 minutes 00:12:17.261 Power Cycles: 0 00:12:17.261 Power On Hours: 0 hours 00:12:17.261 Unsafe Shutdowns: 0 00:12:17.261 Unrecoverable Media Errors: 0 00:12:17.261 Lifetime Error Log Entries: 0 00:12:17.261 Warning Temperature Time: 0 minutes 00:12:17.261 Critical Temperature Time: 0 minutes 00:12:17.261 00:12:17.261 Number of Queues 00:12:17.261 ================ 00:12:17.261 Number of I/O Submission Queues: 64 00:12:17.261 Number of I/O Completion Queues: 64 00:12:17.261 00:12:17.261 ZNS Specific Controller Data 00:12:17.261 ============================ 00:12:17.261 Zone Append Size Limit: 0 00:12:17.261 00:12:17.261 00:12:17.261 Active Namespaces 00:12:17.261 ================= 00:12:17.261 Namespace ID:1 00:12:17.261 Error Recovery Timeout: Unlimited 00:12:17.261 Command Set Identifier: NVM (00h) 00:12:17.261 Deallocate: Supported 00:12:17.261 Deallocated/Unwritten Error: Supported 00:12:17.261 Deallocated Read Value: All 0x00 00:12:17.261 Deallocate in Write Zeroes: Not Supported 00:12:17.261 Deallocated Guard Field: 0xFFFF 00:12:17.261 Flush: Supported 00:12:17.261 Reservation: Not Supported 00:12:17.261 Namespace Sharing Capabilities: Private 00:12:17.261 Size (in LBAs): 1048576 (4GiB) 00:12:17.261 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.261 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.261 Thin Provisioning: Not Supported 00:12:17.261 Per-NS Atomic Units: No 00:12:17.261 Maximum Single Source Range Length: 128 00:12:17.261 Maximum Copy Length: 128 00:12:17.261 Maximum Source Range Count: 128 00:12:17.261 NGUID/EUI64 Never Reused: No 00:12:17.261 Namespace Write Protected: No 00:12:17.261 Number of LBA Formats: 8 00:12:17.261 Current LBA Format: LBA Format #04 00:12:17.261 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.261 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.261 LBA Format #02: Data Size: 512 Metadata Size: 16 
00:12:17.261 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.261 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.261 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.261 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.261 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.261 00:12:17.261 NVM Specific Namespace Data 00:12:17.261 =========================== 00:12:17.261 Logical Block Storage Tag Mask: 0 00:12:17.261 Protection Information Capabilities: 00:12:17.261 [2024-11-25 10:17:24.255068] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64083 terminated unexpected 00:12:17.261 16b Guard Protection Information Storage Tag Support: No 00:12:17.261 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.261 Storage Tag Check Read Support: No 00:12:17.261 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.261 Namespace ID:2 00:12:17.261 Error Recovery Timeout: Unlimited 00:12:17.262 Command Set Identifier: NVM (00h) 00:12:17.262 Deallocate: Supported 00:12:17.262 Deallocated/Unwritten Error: Supported 00:12:17.262 Deallocated Read Value: All 0x00 00:12:17.262 Deallocate in Write Zeroes: Not Supported 00:12:17.262 Deallocated Guard Field: 0xFFFF 00:12:17.262 Flush: Supported 00:12:17.262 Reservation: Not Supported 00:12:17.262 Namespace Sharing Capabilities: Private 00:12:17.262 Size (in LBAs): 1048576 (4GiB) 00:12:17.262 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.262 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.262 Thin Provisioning: Not Supported 00:12:17.262 Per-NS Atomic Units: No 00:12:17.262 Maximum Single Source Range Length: 128 00:12:17.262 Maximum Copy Length: 128 00:12:17.262 Maximum Source Range Count: 128 00:12:17.262 NGUID/EUI64 Never Reused: No 00:12:17.262 Namespace Write Protected: No 00:12:17.262 Number of LBA Formats: 8 00:12:17.262 Current LBA Format: LBA Format #04 00:12:17.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.262 00:12:17.262 NVM Specific Namespace Data 00:12:17.262 =========================== 00:12:17.262 Logical Block Storage Tag Mask: 0 00:12:17.262 Protection Information Capabilities: 00:12:17.262 16b Guard Protection Information Storage Tag Support: No 00:12:17.262 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can 
be 0 00:12:17.262 Storage Tag Check Read Support: No 00:12:17.262 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Namespace ID:3 00:12:17.262 Error Recovery Timeout: Unlimited 00:12:17.262 Command Set Identifier: NVM (00h) 00:12:17.262 Deallocate: Supported 00:12:17.262 Deallocated/Unwritten Error: Supported 00:12:17.262 Deallocated Read Value: All 0x00 00:12:17.262 Deallocate in Write Zeroes: Not Supported 00:12:17.262 Deallocated Guard Field: 0xFFFF 00:12:17.262 Flush: Supported 00:12:17.262 Reservation: Not Supported 00:12:17.262 Namespace Sharing Capabilities: Private 00:12:17.262 Size (in LBAs): 1048576 (4GiB) 00:12:17.262 Capacity (in LBAs): 1048576 (4GiB) 00:12:17.262 Utilization (in LBAs): 1048576 (4GiB) 00:12:17.262 Thin Provisioning: Not Supported 00:12:17.262 Per-NS Atomic Units: No 00:12:17.262 Maximum Single Source Range Length: 128 00:12:17.262 Maximum Copy Length: 128 00:12:17.262 Maximum Source Range Count: 128 00:12:17.262 NGUID/EUI64 Never Reused: No 00:12:17.262 Namespace Write Protected: No 00:12:17.262 Number of LBA Formats: 8 00:12:17.262 Current LBA Format: LBA Format #04 00:12:17.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.262 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.262 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.262 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.262 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.262 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.262 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.262 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.262 00:12:17.262 NVM Specific Namespace Data 00:12:17.262 =========================== 00:12:17.262 Logical Block Storage Tag Mask: 0 00:12:17.262 Protection Information Capabilities: 00:12:17.262 16b Guard Protection Information Storage Tag Support: No 00:12:17.262 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.262 Storage Tag Check Read Support: No 00:12:17.262 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 Extended LBA Format 
#07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.262 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:17.262 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:17.521 ===================================================== 00:12:17.521 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:17.521 ===================================================== 00:12:17.521 Controller Capabilities/Features 00:12:17.521 ================================ 00:12:17.521 Vendor ID: 1b36 00:12:17.521 Subsystem Vendor ID: 1af4 00:12:17.521 Serial Number: 12340 00:12:17.521 Model Number: QEMU NVMe Ctrl 00:12:17.521 Firmware Version: 8.0.0 00:12:17.521 Recommended Arb Burst: 6 00:12:17.521 IEEE OUI Identifier: 00 54 52 00:12:17.521 Multi-path I/O 00:12:17.521 May have multiple subsystem ports: No 00:12:17.521 May have multiple controllers: No 00:12:17.521 Associated with SR-IOV VF: No 00:12:17.521 Max Data Transfer Size: 524288 00:12:17.521 Max Number of Namespaces: 256 00:12:17.521 Max Number of I/O Queues: 64 00:12:17.521 NVMe Specification Version (VS): 1.4 00:12:17.521 NVMe Specification Version (Identify): 1.4 00:12:17.521 Maximum Queue Entries: 2048 00:12:17.521 Contiguous Queues Required: Yes 00:12:17.521 Arbitration Mechanisms Supported 00:12:17.521 Weighted Round Robin: Not Supported 00:12:17.521 Vendor Specific: Not Supported 00:12:17.521 Reset Timeout: 7500 ms 00:12:17.521 Doorbell Stride: 4 bytes 00:12:17.521 NVM Subsystem Reset: Not Supported 00:12:17.521 Command Sets Supported 00:12:17.521 NVM Command Set: Supported 00:12:17.521 Boot Partition: Not Supported 00:12:17.521 Memory Page Size Minimum: 4096 bytes 00:12:17.521 Memory Page Size Maximum: 65536 bytes 00:12:17.521 Persistent Memory Region: Not Supported 00:12:17.521 Optional Asynchronous Events Supported 00:12:17.521 Namespace Attribute Notices: Supported 00:12:17.521 Firmware Activation Notices: Not Supported 00:12:17.521 ANA Change Notices: Not Supported 00:12:17.521 PLE Aggregate Log Change Notices: Not Supported 00:12:17.521 LBA Status Info Alert Notices: Not Supported 00:12:17.521 EGE Aggregate Log Change Notices: Not Supported 00:12:17.521 Normal NVM Subsystem Shutdown event: Not Supported 00:12:17.521 Zone Descriptor Change Notices: Not Supported 00:12:17.521 Discovery Log Change Notices: Not Supported 00:12:17.521 Controller Attributes 00:12:17.521 128-bit Host Identifier: Not Supported 00:12:17.521 Non-Operational Permissive Mode: Not Supported 00:12:17.521 NVM Sets: Not Supported 00:12:17.521 Read Recovery Levels: Not Supported 00:12:17.521 Endurance Groups: Not Supported 00:12:17.521 Predictable Latency Mode: Not Supported 00:12:17.521 Traffic Based Keep Alive: Not Supported 00:12:17.521 Namespace Granularity: Not Supported 00:12:17.521 SQ Associations: Not Supported 00:12:17.521 UUID List: Not Supported 00:12:17.521 Multi-Domain Subsystem: Not Supported 00:12:17.521 Fixed Capacity Management: Not Supported 00:12:17.521 Variable Capacity Management: Not Supported 00:12:17.521 Delete Endurance Group: Not Supported 00:12:17.522 Delete NVM Set: Not Supported 00:12:17.522 Extended LBA Formats Supported: Supported 00:12:17.522 Flexible Data Placement Supported: Not Supported 00:12:17.522 00:12:17.522 Controller Memory Buffer Support 00:12:17.522 ================================ 00:12:17.522 Supported: No 00:12:17.522 00:12:17.522 Persistent Memory Region Support 00:12:17.522
================================ 00:12:17.522 Supported: No 00:12:17.522 00:12:17.522 Admin Command Set Attributes 00:12:17.522 ============================ 00:12:17.522 Security Send/Receive: Not Supported 00:12:17.522 Format NVM: Supported 00:12:17.522 Firmware Activate/Download: Not Supported 00:12:17.522 Namespace Management: Supported 00:12:17.522 Device Self-Test: Not Supported 00:12:17.522 Directives: Supported 00:12:17.522 NVMe-MI: Not Supported 00:12:17.522 Virtualization Management: Not Supported 00:12:17.522 Doorbell Buffer Config: Supported 00:12:17.522 Get LBA Status Capability: Not Supported 00:12:17.522 Command & Feature Lockdown Capability: Not Supported 00:12:17.522 Abort Command Limit: 4 00:12:17.522 Async Event Request Limit: 4 00:12:17.522 Number of Firmware Slots: N/A 00:12:17.522 Firmware Slot 1 Read-Only: N/A 00:12:17.522 Firmware Activation Without Reset: N/A 00:12:17.522 Multiple Update Detection Support: N/A 00:12:17.522 Firmware Update Granularity: No Information Provided 00:12:17.522 Per-Namespace SMART Log: Yes 00:12:17.522 Asymmetric Namespace Access Log Page: Not Supported 00:12:17.522 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:17.522 Command Effects Log Page: Supported 00:12:17.522 Get Log Page Extended Data: Supported 00:12:17.522 Telemetry Log Pages: Not Supported 00:12:17.522 Persistent Event Log Pages: Not Supported 00:12:17.522 Supported Log Pages Log Page: May Support 00:12:17.522 Commands Supported & Effects Log Page: Not Supported 00:12:17.522 Feature Identifiers & Effects Log Page: May Support 00:12:17.522 NVMe-MI Commands & Effects Log Page: May Support 00:12:17.522 Data Area 4 for Telemetry Log: Not Supported 00:12:17.522 Error Log Page Entries Supported: 1 00:12:17.522 Keep Alive: Not Supported 00:12:17.522 00:12:17.522 NVM Command Set Attributes 00:12:17.522 ========================== 00:12:17.522 Submission Queue Entry Size 00:12:17.522 Max: 64 00:12:17.522 Min: 64 00:12:17.522 Completion Queue Entry Size 00:12:17.522 Max: 16 00:12:17.522 Min: 16 00:12:17.522 Number of Namespaces: 256 00:12:17.522 Compare Command: Supported 00:12:17.522 Write Uncorrectable Command: Not Supported 00:12:17.522 Dataset Management Command: Supported 00:12:17.522 Write Zeroes Command: Supported 00:12:17.522 Set Features Save Field: Supported 00:12:17.522 Reservations: Not Supported 00:12:17.522 Timestamp: Supported 00:12:17.522 Copy: Supported 00:12:17.522 Volatile Write Cache: Present 00:12:17.522 Atomic Write Unit (Normal): 1 00:12:17.522 Atomic Write Unit (PFail): 1 00:12:17.522 Atomic Compare & Write Unit: 1 00:12:17.522 Fused Compare & Write: Not Supported 00:12:17.522 Scatter-Gather List 00:12:17.522 SGL Command Set: Supported 00:12:17.522 SGL Keyed: Not Supported 00:12:17.522 SGL Bit Bucket Descriptor: Not Supported 00:12:17.522 SGL Metadata Pointer: Not Supported 00:12:17.522 Oversized SGL: Not Supported 00:12:17.522 SGL Metadata Address: Not Supported 00:12:17.522 SGL Offset: Not Supported 00:12:17.522 Transport SGL Data Block: Not Supported 00:12:17.522 Replay Protected Memory Block: Not Supported 00:12:17.522 00:12:17.522 Firmware Slot Information 00:12:17.522 ========================= 00:12:17.522 Active slot: 1 00:12:17.522 Slot 1 Firmware Revision: 1.0 00:12:17.522 00:12:17.522 00:12:17.522 Commands Supported and Effects 00:12:17.522 ============================== 00:12:17.522 Admin Commands 00:12:17.522 -------------- 00:12:17.522 Delete I/O Submission Queue (00h): Supported 00:12:17.522 Create I/O Submission Queue (01h): Supported 00:12:17.522
Get Log Page (02h): Supported 00:12:17.522 Delete I/O Completion Queue (04h): Supported 00:12:17.522 Create I/O Completion Queue (05h): Supported 00:12:17.522 Identify (06h): Supported 00:12:17.522 Abort (08h): Supported 00:12:17.522 Set Features (09h): Supported 00:12:17.522 Get Features (0Ah): Supported 00:12:17.522 Asynchronous Event Request (0Ch): Supported 00:12:17.522 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:17.522 Directive Send (19h): Supported 00:12:17.522 Directive Receive (1Ah): Supported 00:12:17.522 Virtualization Management (1Ch): Supported 00:12:17.522 Doorbell Buffer Config (7Ch): Supported 00:12:17.522 Format NVM (80h): Supported LBA-Change 00:12:17.522 I/O Commands 00:12:17.522 ------------ 00:12:17.522 Flush (00h): Supported LBA-Change 00:12:17.522 Write (01h): Supported LBA-Change 00:12:17.522 Read (02h): Supported 00:12:17.522 Compare (05h): Supported 00:12:17.522 Write Zeroes (08h): Supported LBA-Change 00:12:17.522 Dataset Management (09h): Supported LBA-Change 00:12:17.522 Unknown (0Ch): Supported 00:12:17.522 Unknown (12h): Supported 00:12:17.522 Copy (19h): Supported LBA-Change 00:12:17.522 Unknown (1Dh): Supported LBA-Change 00:12:17.522 00:12:17.522 Error Log 00:12:17.522 ========= 00:12:17.522 00:12:17.522 Arbitration 00:12:17.522 =========== 00:12:17.522 Arbitration Burst: no limit 00:12:17.522 00:12:17.522 Power Management 00:12:17.522 ================ 00:12:17.522 Number of Power States: 1 00:12:17.522 Current Power State: Power State #0 00:12:17.522 Power State #0: 00:12:17.522 Max Power: 25.00 W 00:12:17.522 Non-Operational State: Operational 00:12:17.522 Entry Latency: 16 microseconds 00:12:17.522 Exit Latency: 4 microseconds 00:12:17.522 Relative Read Throughput: 0 00:12:17.522 Relative Read Latency: 0 00:12:17.522 Relative Write Throughput: 0 00:12:17.522 Relative Write Latency: 0 00:12:17.522 Idle Power: Not Reported 00:12:17.522 Active Power: Not Reported 00:12:17.522 Non-Operational Permissive Mode: Not Supported 00:12:17.522 00:12:17.522 Health Information 00:12:17.522 ================== 00:12:17.522 Critical Warnings: 00:12:17.522 Available Spare Space: OK 00:12:17.522 Temperature: OK 00:12:17.522 Device Reliability: OK 00:12:17.522 Read Only: No 00:12:17.522 Volatile Memory Backup: OK 00:12:17.522 Current Temperature: 323 Kelvin (50 Celsius) 00:12:17.522 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:17.522 Available Spare: 0% 00:12:17.522 Available Spare Threshold: 0% 00:12:17.522 Life Percentage Used: 0% 00:12:17.522 Data Units Read: 774 00:12:17.522 Data Units Written: 702 00:12:17.522 Host Read Commands: 36662 00:12:17.522 Host Write Commands: 36448 00:12:17.522 Controller Busy Time: 0 minutes 00:12:17.522 Power Cycles: 0 00:12:17.522 Power On Hours: 0 hours 00:12:17.522 Unsafe Shutdowns: 0 00:12:17.522 Unrecoverable Media Errors: 0 00:12:17.522 Lifetime Error Log Entries: 0 00:12:17.522 Warning Temperature Time: 0 minutes 00:12:17.522 Critical Temperature Time: 0 minutes 00:12:17.522 00:12:17.522 Number of Queues 00:12:17.522 ================ 00:12:17.522 Number of I/O Submission Queues: 64 00:12:17.522 Number of I/O Completion Queues: 64 00:12:17.522 00:12:17.522 ZNS Specific Controller Data 00:12:17.522 ============================ 00:12:17.522 Zone Append Size Limit: 0 00:12:17.522 00:12:17.522 00:12:17.522 Active Namespaces 00:12:17.522 ================= 00:12:17.522 Namespace ID:1 00:12:17.522 Error Recovery Timeout: Unlimited 00:12:17.522 Command Set Identifier: NVM (00h) 00:12:17.522 Deallocate: Supported 
00:12:17.522 Deallocated/Unwritten Error: Supported 00:12:17.522 Deallocated Read Value: All 0x00 00:12:17.522 Deallocate in Write Zeroes: Not Supported 00:12:17.522 Deallocated Guard Field: 0xFFFF 00:12:17.522 Flush: Supported 00:12:17.522 Reservation: Not Supported 00:12:17.522 Metadata Transferred as: Separate Metadata Buffer 00:12:17.522 Namespace Sharing Capabilities: Private 00:12:17.522 Size (in LBAs): 1548666 (5GiB) 00:12:17.522 Capacity (in LBAs): 1548666 (5GiB) 00:12:17.522 Utilization (in LBAs): 1548666 (5GiB) 00:12:17.522 Thin Provisioning: Not Supported 00:12:17.522 Per-NS Atomic Units: No 00:12:17.522 Maximum Single Source Range Length: 128 00:12:17.522 Maximum Copy Length: 128 00:12:17.522 Maximum Source Range Count: 128 00:12:17.522 NGUID/EUI64 Never Reused: No 00:12:17.522 Namespace Write Protected: No 00:12:17.522 Number of LBA Formats: 8 00:12:17.522 Current LBA Format: LBA Format #07 00:12:17.523 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:17.523 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:17.523 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:17.523 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:17.523 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:17.523 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:17.523 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:17.523 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:17.523 00:12:17.523 NVM Specific Namespace Data 00:12:17.523 =========================== 00:12:17.523 Logical Block Storage Tag Mask: 0 00:12:17.523 Protection Information Capabilities: 00:12:17.523 16b Guard Protection Information Storage Tag Support: No 00:12:17.523 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:17.523 Storage Tag Check Read Support: No 00:12:17.523 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.523 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:17.782 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:17.782 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:18.041 ===================================================== 00:12:18.041 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:18.041 ===================================================== 00:12:18.041 Controller Capabilities/Features 00:12:18.041 ================================ 00:12:18.041 Vendor ID: 1b36 00:12:18.041 Subsystem Vendor ID: 1af4 00:12:18.041 Serial Number: 12341 00:12:18.041 Model Number: QEMU NVMe Ctrl 00:12:18.041 Firmware Version: 8.0.0 00:12:18.041 Recommended Arb Burst: 6 00:12:18.041 IEEE OUI Identifier: 00 54 52 00:12:18.041 Multi-path I/O 00:12:18.041 May have multiple subsystem ports: No 00:12:18.041 May have multiple 
controllers: No 00:12:18.042 Associated with SR-IOV VF: No 00:12:18.042 Max Data Transfer Size: 524288 00:12:18.042 Max Number of Namespaces: 256 00:12:18.042 Max Number of I/O Queues: 64 00:12:18.042 NVMe Specification Version (VS): 1.4 00:12:18.042 NVMe Specification Version (Identify): 1.4 00:12:18.042 Maximum Queue Entries: 2048 00:12:18.042 Contiguous Queues Required: Yes 00:12:18.042 Arbitration Mechanisms Supported 00:12:18.042 Weighted Round Robin: Not Supported 00:12:18.042 Vendor Specific: Not Supported 00:12:18.042 Reset Timeout: 7500 ms 00:12:18.042 Doorbell Stride: 4 bytes 00:12:18.042 NVM Subsystem Reset: Not Supported 00:12:18.042 Command Sets Supported 00:12:18.042 NVM Command Set: Supported 00:12:18.042 Boot Partition: Not Supported 00:12:18.042 Memory Page Size Minimum: 4096 bytes 00:12:18.042 Memory Page Size Maximum: 65536 bytes 00:12:18.042 Persistent Memory Region: Not Supported 00:12:18.042 Optional Asynchronous Events Supported 00:12:18.042 Namespace Attribute Notices: Supported 00:12:18.042 Firmware Activation Notices: Not Supported 00:12:18.042 ANA Change Notices: Not Supported 00:12:18.042 PLE Aggregate Log Change Notices: Not Supported 00:12:18.042 LBA Status Info Alert Notices: Not Supported 00:12:18.042 EGE Aggregate Log Change Notices: Not Supported 00:12:18.042 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.042 Zone Descriptor Change Notices: Not Supported 00:12:18.042 Discovery Log Change Notices: Not Supported 00:12:18.042 Controller Attributes 00:12:18.042 128-bit Host Identifier: Not Supported 00:12:18.042 Non-Operational Permissive Mode: Not Supported 00:12:18.042 NVM Sets: Not Supported 00:12:18.042 Read Recovery Levels: Not Supported 00:12:18.042 Endurance Groups: Not Supported 00:12:18.042 Predictable Latency Mode: Not Supported 00:12:18.042 Traffic Based Keep Alive: Not Supported 00:12:18.042 Namespace Granularity: Not Supported 00:12:18.042 SQ Associations: Not Supported 00:12:18.042 UUID List: Not Supported 00:12:18.042 Multi-Domain Subsystem: Not Supported 00:12:18.042 Fixed Capacity Management: Not Supported 00:12:18.042 Variable Capacity Management: Not Supported 00:12:18.042 Delete Endurance Group: Not Supported 00:12:18.042 Delete NVM Set: Not Supported 00:12:18.042 Extended LBA Formats Supported: Supported 00:12:18.042 Flexible Data Placement Supported: Not Supported 00:12:18.042 00:12:18.042 Controller Memory Buffer Support 00:12:18.042 ================================ 00:12:18.042 Supported: No 00:12:18.042 00:12:18.042 Persistent Memory Region Support 00:12:18.042 ================================ 00:12:18.042 Supported: No 00:12:18.042 00:12:18.042 Admin Command Set Attributes 00:12:18.042 ============================ 00:12:18.042 Security Send/Receive: Not Supported 00:12:18.042 Format NVM: Supported 00:12:18.042 Firmware Activate/Download: Not Supported 00:12:18.042 Namespace Management: Supported 00:12:18.042 Device Self-Test: Not Supported 00:12:18.042 Directives: Supported 00:12:18.042 NVMe-MI: Not Supported 00:12:18.042 Virtualization Management: Not Supported 00:12:18.042 Doorbell Buffer Config: Supported 00:12:18.042 Get LBA Status Capability: Not Supported 00:12:18.042 Command & Feature Lockdown Capability: Not Supported 00:12:18.042 Abort Command Limit: 4 00:12:18.042 Async Event Request Limit: 4 00:12:18.042 Number of Firmware Slots: N/A 00:12:18.042 Firmware Slot 1 Read-Only: N/A 00:12:18.042 Firmware Activation Without Reset: N/A 00:12:18.042 Multiple Update Detection Support: N/A 00:12:18.042 Firmware Update
Granularity: No Information Provided 00:12:18.042 Per-Namespace SMART Log: Yes 00:12:18.042 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.042 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:18.042 Command Effects Log Page: Supported 00:12:18.042 Get Log Page Extended Data: Supported 00:12:18.042 Telemetry Log Pages: Not Supported 00:12:18.042 Persistent Event Log Pages: Not Supported 00:12:18.042 Supported Log Pages Log Page: May Support 00:12:18.042 Commands Supported & Effects Log Page: Not Supported 00:12:18.042 Feature Identifiers & Effects Log Page: May Support 00:12:18.042 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.042 Data Area 4 for Telemetry Log: Not Supported 00:12:18.042 Error Log Page Entries Supported: 1 00:12:18.042 Keep Alive: Not Supported 00:12:18.042 00:12:18.042 NVM Command Set Attributes 00:12:18.042 ========================== 00:12:18.042 Submission Queue Entry Size 00:12:18.042 Max: 64 00:12:18.042 Min: 64 00:12:18.042 Completion Queue Entry Size 00:12:18.042 Max: 16 00:12:18.042 Min: 16 00:12:18.042 Number of Namespaces: 256 00:12:18.042 Compare Command: Supported 00:12:18.042 Write Uncorrectable Command: Not Supported 00:12:18.042 Dataset Management Command: Supported 00:12:18.042 Write Zeroes Command: Supported 00:12:18.042 Set Features Save Field: Supported 00:12:18.042 Reservations: Not Supported 00:12:18.042 Timestamp: Supported 00:12:18.042 Copy: Supported 00:12:18.042 Volatile Write Cache: Present 00:12:18.042 Atomic Write Unit (Normal): 1 00:12:18.042 Atomic Write Unit (PFail): 1 00:12:18.042 Atomic Compare & Write Unit: 1 00:12:18.042 Fused Compare & Write: Not Supported 00:12:18.042 Scatter-Gather List 00:12:18.042 SGL Command Set: Supported 00:12:18.042 SGL Keyed: Not Supported 00:12:18.042 SGL Bit Bucket Descriptor: Not Supported 00:12:18.042 SGL Metadata Pointer: Not Supported 00:12:18.042 Oversized SGL: Not Supported 00:12:18.042 SGL Metadata Address: Not Supported 00:12:18.042 SGL Offset: Not Supported 00:12:18.042 Transport SGL Data Block: Not Supported 00:12:18.042 Replay Protected Memory Block: Not Supported 00:12:18.042 00:12:18.042 Firmware Slot Information 00:12:18.042 ========================= 00:12:18.042 Active slot: 1 00:12:18.042 Slot 1 Firmware Revision: 1.0 00:12:18.042 00:12:18.042 00:12:18.042 Commands Supported and Effects 00:12:18.042 ============================== 00:12:18.042 Admin Commands 00:12:18.042 -------------- 00:12:18.042 Delete I/O Submission Queue (00h): Supported 00:12:18.042 Create I/O Submission Queue (01h): Supported 00:12:18.042 Get Log Page (02h): Supported 00:12:18.042 Delete I/O Completion Queue (04h): Supported 00:12:18.042 Create I/O Completion Queue (05h): Supported 00:12:18.042 Identify (06h): Supported 00:12:18.042 Abort (08h): Supported 00:12:18.042 Set Features (09h): Supported 00:12:18.042 Get Features (0Ah): Supported 00:12:18.042 Asynchronous Event Request (0Ch): Supported 00:12:18.042 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.042 Directive Send (19h): Supported 00:12:18.042 Directive Receive (1Ah): Supported 00:12:18.042 Virtualization Management (1Ch): Supported 00:12:18.042 Doorbell Buffer Config (7Ch): Supported 00:12:18.042 Format NVM (80h): Supported LBA-Change 00:12:18.042 I/O Commands 00:12:18.042 ------------ 00:12:18.042 Flush (00h): Supported LBA-Change 00:12:18.042 Write (01h): Supported LBA-Change 00:12:18.042 Read (02h): Supported 00:12:18.042 Compare (05h): Supported 00:12:18.042 Write Zeroes (08h): Supported LBA-Change 00:12:18.042
Dataset Management (09h): Supported LBA-Change 00:12:18.042 Unknown (0Ch): Supported 00:12:18.042 Unknown (12h): Supported 00:12:18.042 Copy (19h): Supported LBA-Change 00:12:18.042 Unknown (1Dh): Supported LBA-Change 00:12:18.042 00:12:18.042 Error Log 00:12:18.042 ========= 00:12:18.042 00:12:18.042 Arbitration 00:12:18.042 =========== 00:12:18.042 Arbitration Burst: no limit 00:12:18.042 00:12:18.042 Power Management 00:12:18.042 ================ 00:12:18.042 Number of Power States: 1 00:12:18.042 Current Power State: Power State #0 00:12:18.043 Power State #0: 00:12:18.043 Max Power: 25.00 W 00:12:18.043 Non-Operational State: Operational 00:12:18.043 Entry Latency: 16 microseconds 00:12:18.043 Exit Latency: 4 microseconds 00:12:18.043 Relative Read Throughput: 0 00:12:18.043 Relative Read Latency: 0 00:12:18.043 Relative Write Throughput: 0 00:12:18.043 Relative Write Latency: 0 00:12:18.043 Idle Power: Not Reported 00:12:18.043 Active Power: Not Reported 00:12:18.043 Non-Operational Permissive Mode: Not Supported 00:12:18.043 00:12:18.043 Health Information 00:12:18.043 ================== 00:12:18.043 Critical Warnings: 00:12:18.043 Available Spare Space: OK 00:12:18.043 Temperature: OK 00:12:18.043 Device Reliability: OK 00:12:18.043 Read Only: No 00:12:18.043 Volatile Memory Backup: OK 00:12:18.043 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.043 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.043 Available Spare: 0% 00:12:18.043 Available Spare Threshold: 0% 00:12:18.043 Life Percentage Used: 0% 00:12:18.043 Data Units Read: 1197 00:12:18.043 Data Units Written: 1064 00:12:18.043 Host Read Commands: 54776 00:12:18.043 Host Write Commands: 53561 00:12:18.043 Controller Busy Time: 0 minutes 00:12:18.043 Power Cycles: 0 00:12:18.043 Power On Hours: 0 hours 00:12:18.043 Unsafe Shutdowns: 0 00:12:18.043 Unrecoverable Media Errors: 0 00:12:18.043 Lifetime Error Log Entries: 0 00:12:18.043 Warning Temperature Time: 0 minutes 00:12:18.043 Critical Temperature Time: 0 minutes 00:12:18.043 00:12:18.043 Number of Queues 00:12:18.043 ================ 00:12:18.043 Number of I/O Submission Queues: 64 00:12:18.043 Number of I/O Completion Queues: 64 00:12:18.043 00:12:18.043 ZNS Specific Controller Data 00:12:18.043 ============================ 00:12:18.043 Zone Append Size Limit: 0 00:12:18.043 00:12:18.043 00:12:18.043 Active Namespaces 00:12:18.043 ================= 00:12:18.043 Namespace ID:1 00:12:18.043 Error Recovery Timeout: Unlimited 00:12:18.043 Command Set Identifier: NVM (00h) 00:12:18.043 Deallocate: Supported 00:12:18.043 Deallocated/Unwritten Error: Supported 00:12:18.043 Deallocated Read Value: All 0x00 00:12:18.043 Deallocate in Write Zeroes: Not Supported 00:12:18.043 Deallocated Guard Field: 0xFFFF 00:12:18.043 Flush: Supported 00:12:18.043 Reservation: Not Supported 00:12:18.043 Namespace Sharing Capabilities: Private 00:12:18.043 Size (in LBAs): 1310720 (5GiB) 00:12:18.043 Capacity (in LBAs): 1310720 (5GiB) 00:12:18.043 Utilization (in LBAs): 1310720 (5GiB) 00:12:18.043 Thin Provisioning: Not Supported 00:12:18.043 Per-NS Atomic Units: No 00:12:18.043 Maximum Single Source Range Length: 128 00:12:18.043 Maximum Copy Length: 128 00:12:18.043 Maximum Source Range Count: 128 00:12:18.043 NGUID/EUI64 Never Reused: No 00:12:18.043 Namespace Write Protected: No 00:12:18.043 Number of LBA Formats: 8 00:12:18.043 Current LBA Format: LBA Format #04 00:12:18.043 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.043 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:12:18.043 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.043 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.043 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.043 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.043 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.043 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.043 00:12:18.043 NVM Specific Namespace Data 00:12:18.043 =========================== 00:12:18.043 Logical Block Storage Tag Mask: 0 00:12:18.043 Protection Information Capabilities: 00:12:18.043 16b Guard Protection Information Storage Tag Support: No 00:12:18.043 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.043 Storage Tag Check Read Support: No 00:12:18.043 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.043 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:18.043 10:17:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:18.303 ===================================================== 00:12:18.303 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:18.303 ===================================================== 00:12:18.303 Controller Capabilities/Features 00:12:18.303 ================================ 00:12:18.303 Vendor ID: 1b36 00:12:18.303 Subsystem Vendor ID: 1af4 00:12:18.303 Serial Number: 12342 00:12:18.303 Model Number: QEMU NVMe Ctrl 00:12:18.303 Firmware Version: 8.0.0 00:12:18.303 Recommended Arb Burst: 6 00:12:18.303 IEEE OUI Identifier: 00 54 52 00:12:18.303 Multi-path I/O 00:12:18.303 May have multiple subsystem ports: No 00:12:18.303 May have multiple controllers: No 00:12:18.303 Associated with SR-IOV VF: No 00:12:18.303 Max Data Transfer Size: 524288 00:12:18.303 Max Number of Namespaces: 256 00:12:18.303 Max Number of I/O Queues: 64 00:12:18.303 NVMe Specification Version (VS): 1.4 00:12:18.303 NVMe Specification Version (Identify): 1.4 00:12:18.303 Maximum Queue Entries: 2048 00:12:18.303 Contiguous Queues Required: Yes 00:12:18.303 Arbitration Mechanisms Supported 00:12:18.303 Weighted Round Robin: Not Supported 00:12:18.303 Vendor Specific: Not Supported 00:12:18.303 Reset Timeout: 7500 ms 00:12:18.303 Doorbell Stride: 4 bytes 00:12:18.303 NVM Subsystem Reset: Not Supported 00:12:18.303 Command Sets Supported 00:12:18.303 NVM Command Set: Supported 00:12:18.303 Boot Partition: Not Supported 00:12:18.303 Memory Page Size Minimum: 4096 bytes 00:12:18.303 Memory Page Size Maximum: 65536 bytes 00:12:18.303 Persistent Memory Region: Not Supported 00:12:18.303 Optional Asynchronous Events Supported 00:12:18.303 Namespace Attribute Notices: Supported 00:12:18.303 
Firmware Activation Notices: Not Supported 00:12:18.303 ANA Change Notices: Not Supported 00:12:18.303 PLE Aggregate Log Change Notices: Not Supported 00:12:18.303 LBA Status Info Alert Notices: Not Supported 00:12:18.303 EGE Aggregate Log Change Notices: Not Supported 00:12:18.303 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.303 Zone Descriptor Change Notices: Not Supported 00:12:18.303 Discovery Log Change Notices: Not Supported 00:12:18.303 Controller Attributes 00:12:18.303 128-bit Host Identifier: Not Supported 00:12:18.303 Non-Operational Permissive Mode: Not Supported 00:12:18.303 NVM Sets: Not Supported 00:12:18.303 Read Recovery Levels: Not Supported 00:12:18.303 Endurance Groups: Not Supported 00:12:18.303 Predictable Latency Mode: Not Supported 00:12:18.303 Traffic Based Keep Alive: Not Supported 00:12:18.303 Namespace Granularity: Not Supported 00:12:18.303 SQ Associations: Not Supported 00:12:18.303 UUID List: Not Supported 00:12:18.303 Multi-Domain Subsystem: Not Supported 00:12:18.303 Fixed Capacity Management: Not Supported 00:12:18.303 Variable Capacity Management: Not Supported 00:12:18.303 Delete Endurance Group: Not Supported 00:12:18.303 Delete NVM Set: Not Supported 00:12:18.303 Extended LBA Formats Supported: Supported 00:12:18.303 Flexible Data Placement Supported: Not Supported 00:12:18.303 00:12:18.303 Controller Memory Buffer Support 00:12:18.303 ================================ 00:12:18.304 Supported: No 00:12:18.304 00:12:18.304 Persistent Memory Region Support 00:12:18.304 ================================ 00:12:18.304 Supported: No 00:12:18.304 00:12:18.304 Admin Command Set Attributes 00:12:18.304 ============================ 00:12:18.304 Security Send/Receive: Not Supported 00:12:18.304 Format NVM: Supported 00:12:18.304 Firmware Activate/Download: Not Supported 00:12:18.304 Namespace Management: Supported 00:12:18.304 Device Self-Test: Not Supported 00:12:18.304 Directives: Supported 00:12:18.304 NVMe-MI: Not Supported 00:12:18.304 Virtualization Management: Not Supported 00:12:18.304 Doorbell Buffer Config: Supported 00:12:18.304 Get LBA Status Capability: Not Supported 00:12:18.304 Command & Feature Lockdown Capability: Not Supported 00:12:18.304 Abort Command Limit: 4 00:12:18.304 Async Event Request Limit: 4 00:12:18.304 Number of Firmware Slots: N/A 00:12:18.304 Firmware Slot 1 Read-Only: N/A 00:12:18.304 Firmware Activation Without Reset: N/A 00:12:18.304 Multiple Update Detection Support: N/A 00:12:18.304 Firmware Update Granularity: No Information Provided 00:12:18.304 Per-Namespace SMART Log: Yes 00:12:18.304 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.304 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:18.304 Command Effects Log Page: Supported 00:12:18.304 Get Log Page Extended Data: Supported 00:12:18.304 Telemetry Log Pages: Not Supported 00:12:18.304 Persistent Event Log Pages: Not Supported 00:12:18.304 Supported Log Pages Log Page: May Support 00:12:18.304 Commands Supported & Effects Log Page: Not Supported 00:12:18.304 Feature Identifiers & Effects Log Page: May Support 00:12:18.304 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.304 Data Area 4 for Telemetry Log: Not Supported 00:12:18.304 Error Log Page Entries Supported: 1 00:12:18.304 Keep Alive: Not Supported 00:12:18.304 00:12:18.304 NVM Command Set Attributes 00:12:18.304 ========================== 00:12:18.304 Submission Queue Entry Size 00:12:18.304 Max: 64 00:12:18.304 Min: 64 00:12:18.304 Completion Queue Entry Size 00:12:18.304 Max: 16
00:12:18.304 Min: 16 00:12:18.304 Number of Namespaces: 256 00:12:18.304 Compare Command: Supported 00:12:18.304 Write Uncorrectable Command: Not Supported 00:12:18.304 Dataset Management Command: Supported 00:12:18.304 Write Zeroes Command: Supported 00:12:18.304 Set Features Save Field: Supported 00:12:18.304 Reservations: Not Supported 00:12:18.304 Timestamp: Supported 00:12:18.304 Copy: Supported 00:12:18.304 Volatile Write Cache: Present 00:12:18.304 Atomic Write Unit (Normal): 1 00:12:18.304 Atomic Write Unit (PFail): 1 00:12:18.304 Atomic Compare & Write Unit: 1 00:12:18.304 Fused Compare & Write: Not Supported 00:12:18.304 Scatter-Gather List 00:12:18.304 SGL Command Set: Supported 00:12:18.304 SGL Keyed: Not Supported 00:12:18.304 SGL Bit Bucket Descriptor: Not Supported 00:12:18.304 SGL Metadata Pointer: Not Supported 00:12:18.304 Oversized SGL: Not Supported 00:12:18.304 SGL Metadata Address: Not Supported 00:12:18.304 SGL Offset: Not Supported 00:12:18.304 Transport SGL Data Block: Not Supported 00:12:18.304 Replay Protected Memory Block: Not Supported 00:12:18.304 00:12:18.304 Firmware Slot Information 00:12:18.304 ========================= 00:12:18.304 Active slot: 1 00:12:18.304 Slot 1 Firmware Revision: 1.0 00:12:18.304 00:12:18.304 00:12:18.304 Commands Supported and Effects 00:12:18.304 ============================== 00:12:18.304 Admin Commands 00:12:18.304 -------------- 00:12:18.304 Delete I/O Submission Queue (00h): Supported 00:12:18.304 Create I/O Submission Queue (01h): Supported 00:12:18.304 Get Log Page (02h): Supported 00:12:18.304 Delete I/O Completion Queue (04h): Supported 00:12:18.304 Create I/O Completion Queue (05h): Supported 00:12:18.304 Identify (06h): Supported 00:12:18.304 Abort (08h): Supported 00:12:18.304 Set Features (09h): Supported 00:12:18.304 Get Features (0Ah): Supported 00:12:18.304 Asynchronous Event Request (0Ch): Supported 00:12:18.304 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.304 Directive Send (19h): Supported 00:12:18.304 Directive Receive (1Ah): Supported 00:12:18.304 Virtualization Management (1Ch): Supported 00:12:18.304 Doorbell Buffer Config (7Ch): Supported 00:12:18.304 Format NVM (80h): Supported LBA-Change 00:12:18.304 I/O Commands 00:12:18.304 ------------ 00:12:18.304 Flush (00h): Supported LBA-Change 00:12:18.304 Write (01h): Supported LBA-Change 00:12:18.304 Read (02h): Supported 00:12:18.304 Compare (05h): Supported 00:12:18.304 Write Zeroes (08h): Supported LBA-Change 00:12:18.304 Dataset Management (09h): Supported LBA-Change 00:12:18.304 Unknown (0Ch): Supported 00:12:18.304 Unknown (12h): Supported 00:12:18.304 Copy (19h): Supported LBA-Change 00:12:18.304 Unknown (1Dh): Supported LBA-Change 00:12:18.304 00:12:18.304 Error Log 00:12:18.304 ========= 00:12:18.304 00:12:18.304 Arbitration 00:12:18.304 =========== 00:12:18.304 Arbitration Burst: no limit 00:12:18.304 00:12:18.304 Power Management 00:12:18.304 ================ 00:12:18.304 Number of Power States: 1 00:12:18.304 Current Power State: Power State #0 00:12:18.304 Power State #0: 00:12:18.304 Max Power: 25.00 W 00:12:18.304 Non-Operational State: Operational 00:12:18.304 Entry Latency: 16 microseconds 00:12:18.304 Exit Latency: 4 microseconds 00:12:18.304 Relative Read Throughput: 0 00:12:18.304 Relative Read Latency: 0 00:12:18.304 Relative Write Throughput: 0 00:12:18.304 Relative Write Latency: 0 00:12:18.304 Idle Power: Not Reported 00:12:18.304 Active Power: Not Reported 00:12:18.304 Non-Operational Permissive Mode: Not Supported 
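The Health Information block printed next reports temperatures in Kelvin and usage counters in raw NVMe units. As a sanity check on the figures this dump prints, here is a minimal shell sketch of the conversions, under two assumptions: the NVMe base spec's definition of one SMART data unit as 1000 512-byte blocks, and the 4096-byte LBAs of the current LBA Format #04 reported below:

  kelvin=323                                  # "Current Temperature" below
  echo "$((kelvin - 273)) Celsius"            # -> 50 Celsius, as printed
  units_read=2457                             # "Data Units Read" below
  echo "$((units_read * 1000 * 512)) bytes"   # -> 1257984000 bytes (~1.2 GB) read
  lbas=1048576; lba_size=4096                 # "Size (in LBAs)" with LBA Format #04
  echo "$((lbas * lba_size / 1024**3)) GiB"   # -> 4 GiB, matching "(4GiB)" below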
00:12:18.304 00:12:18.304 Health Information 00:12:18.304 ================== 00:12:18.304 Critical Warnings: 00:12:18.304 Available Spare Space: OK 00:12:18.304 Temperature: OK 00:12:18.304 Device Reliability: OK 00:12:18.304 Read Only: No 00:12:18.304 Volatile Memory Backup: OK 00:12:18.304 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.304 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.304 Available Spare: 0% 00:12:18.304 Available Spare Threshold: 0% 00:12:18.304 Life Percentage Used: 0% 00:12:18.304 Data Units Read: 2457 00:12:18.304 Data Units Written: 2244 00:12:18.305 Host Read Commands: 112077 00:12:18.305 Host Write Commands: 110346 00:12:18.305 Controller Busy Time: 0 minutes 00:12:18.305 Power Cycles: 0 00:12:18.305 Power On Hours: 0 hours 00:12:18.305 Unsafe Shutdowns: 0 00:12:18.305 Unrecoverable Media Errors: 0 00:12:18.305 Lifetime Error Log Entries: 0 00:12:18.305 Warning Temperature Time: 0 minutes 00:12:18.305 Critical Temperature Time: 0 minutes 00:12:18.305 00:12:18.305 Number of Queues 00:12:18.305 ================ 00:12:18.305 Number of I/O Submission Queues: 64 00:12:18.305 Number of I/O Completion Queues: 64 00:12:18.305 00:12:18.305 ZNS Specific Controller Data 00:12:18.305 ============================ 00:12:18.305 Zone Append Size Limit: 0 00:12:18.305 00:12:18.305 00:12:18.305 Active Namespaces 00:12:18.305 ================= 00:12:18.305 Namespace ID:1 00:12:18.305 Error Recovery Timeout: Unlimited 00:12:18.305 Command Set Identifier: NVM (00h) 00:12:18.305 Deallocate: Supported 00:12:18.305 Deallocated/Unwritten Error: Supported 00:12:18.305 Deallocated Read Value: All 0x00 00:12:18.305 Deallocate in Write Zeroes: Not Supported 00:12:18.305 Deallocated Guard Field: 0xFFFF 00:12:18.305 Flush: Supported 00:12:18.305 Reservation: Not Supported 00:12:18.305 Namespace Sharing Capabilities: Private 00:12:18.305 Size (in LBAs): 1048576 (4GiB) 00:12:18.305 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.305 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.305 Thin Provisioning: Not Supported 00:12:18.305 Per-NS Atomic Units: No 00:12:18.305 Maximum Single Source Range Length: 128 00:12:18.305 Maximum Copy Length: 128 00:12:18.305 Maximum Source Range Count: 128 00:12:18.305 NGUID/EUI64 Never Reused: No 00:12:18.305 Namespace Write Protected: No 00:12:18.305 Number of LBA Formats: 8 00:12:18.305 Current LBA Format: LBA Format #04 00:12:18.305 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.305 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.305 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.305 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.305 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.305 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.305 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.305 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.305 00:12:18.305 NVM Specific Namespace Data 00:12:18.305 =========================== 00:12:18.305 Logical Block Storage Tag Mask: 0 00:12:18.305 Protection Information Capabilities: 00:12:18.305 16b Guard Protection Information Storage Tag Support: No 00:12:18.305 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.305 Storage Tag Check Read Support: No 00:12:18.305 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Namespace ID:2 00:12:18.305 Error Recovery Timeout: Unlimited 00:12:18.305 Command Set Identifier: NVM (00h) 00:12:18.305 Deallocate: Supported 00:12:18.305 Deallocated/Unwritten Error: Supported 00:12:18.305 Deallocated Read Value: All 0x00 00:12:18.305 Deallocate in Write Zeroes: Not Supported 00:12:18.305 Deallocated Guard Field: 0xFFFF 00:12:18.305 Flush: Supported 00:12:18.305 Reservation: Not Supported 00:12:18.305 Namespace Sharing Capabilities: Private 00:12:18.305 Size (in LBAs): 1048576 (4GiB) 00:12:18.305 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.305 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.305 Thin Provisioning: Not Supported 00:12:18.305 Per-NS Atomic Units: No 00:12:18.305 Maximum Single Source Range Length: 128 00:12:18.305 Maximum Copy Length: 128 00:12:18.305 Maximum Source Range Count: 128 00:12:18.305 NGUID/EUI64 Never Reused: No 00:12:18.305 Namespace Write Protected: No 00:12:18.305 Number of LBA Formats: 8 00:12:18.305 Current LBA Format: LBA Format #04 00:12:18.305 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.305 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.305 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.305 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.305 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.305 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.305 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.305 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.305 00:12:18.305 NVM Specific Namespace Data 00:12:18.305 =========================== 00:12:18.305 Logical Block Storage Tag Mask: 0 00:12:18.305 Protection Information Capabilities: 00:12:18.305 16b Guard Protection Information Storage Tag Support: No 00:12:18.305 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.305 Storage Tag Check Read Support: No 00:12:18.305 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.305 Namespace ID:3 00:12:18.305 Error Recovery Timeout: Unlimited 00:12:18.305 Command Set Identifier: NVM (00h) 00:12:18.305 Deallocate: Supported 00:12:18.305 Deallocated/Unwritten Error: Supported 00:12:18.305 Deallocated Read 
Value: All 0x00 00:12:18.305 Deallocate in Write Zeroes: Not Supported 00:12:18.305 Deallocated Guard Field: 0xFFFF 00:12:18.305 Flush: Supported 00:12:18.305 Reservation: Not Supported 00:12:18.306 Namespace Sharing Capabilities: Private 00:12:18.306 Size (in LBAs): 1048576 (4GiB) 00:12:18.306 Capacity (in LBAs): 1048576 (4GiB) 00:12:18.306 Utilization (in LBAs): 1048576 (4GiB) 00:12:18.306 Thin Provisioning: Not Supported 00:12:18.306 Per-NS Atomic Units: No 00:12:18.306 Maximum Single Source Range Length: 128 00:12:18.306 Maximum Copy Length: 128 00:12:18.306 Maximum Source Range Count: 128 00:12:18.306 NGUID/EUI64 Never Reused: No 00:12:18.306 Namespace Write Protected: No 00:12:18.306 Number of LBA Formats: 8 00:12:18.306 Current LBA Format: LBA Format #04 00:12:18.306 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.306 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.306 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.306 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:18.306 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.306 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.306 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.306 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.306 00:12:18.306 NVM Specific Namespace Data 00:12:18.306 =========================== 00:12:18.306 Logical Block Storage Tag Mask: 0 00:12:18.306 Protection Information Capabilities: 00:12:18.306 16b Guard Protection Information Storage Tag Support: No 00:12:18.306 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.306 Storage Tag Check Read Support: No 00:12:18.306 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.306 10:17:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:18.306 10:17:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:18.566 ===================================================== 00:12:18.566 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:18.566 ===================================================== 00:12:18.566 Controller Capabilities/Features 00:12:18.566 ================================ 00:12:18.566 Vendor ID: 1b36 00:12:18.566 Subsystem Vendor ID: 1af4 00:12:18.566 Serial Number: 12343 00:12:18.566 Model Number: QEMU NVMe Ctrl 00:12:18.566 Firmware Version: 8.0.0 00:12:18.566 Recommended Arb Burst: 6 00:12:18.566 IEEE OUI Identifier: 00 54 52 00:12:18.566 Multi-path I/O 00:12:18.566 May have multiple subsystem ports: No 00:12:18.566 May have multiple controllers: Yes 00:12:18.566 Associated with SR-IOV VF: No 00:12:18.566 Max Data Transfer Size: 524288 00:12:18.566 Max Number of Namespaces: 
256 00:12:18.566 Max Number of I/O Queues: 64 00:12:18.566 NVMe Specification Version (VS): 1.4 00:12:18.566 NVMe Specification Version (Identify): 1.4 00:12:18.566 Maximum Queue Entries: 2048 00:12:18.566 Contiguous Queues Required: Yes 00:12:18.566 Arbitration Mechanisms Supported 00:12:18.566 Weighted Round Robin: Not Supported 00:12:18.566 Vendor Specific: Not Supported 00:12:18.566 Reset Timeout: 7500 ms 00:12:18.566 Doorbell Stride: 4 bytes 00:12:18.566 NVM Subsystem Reset: Not Supported 00:12:18.566 Command Sets Supported 00:12:18.566 NVM Command Set: Supported 00:12:18.566 Boot Partition: Not Supported 00:12:18.566 Memory Page Size Minimum: 4096 bytes 00:12:18.566 Memory Page Size Maximum: 65536 bytes 00:12:18.566 Persistent Memory Region: Not Supported 00:12:18.566 Optional Asynchronous Events Supported 00:12:18.566 Namespace Attribute Notices: Supported 00:12:18.566 Firmware Activation Notices: Not Supported 00:12:18.566 ANA Change Notices: Not Supported 00:12:18.566 PLE Aggregate Log Change Notices: Not Supported 00:12:18.566 LBA Status Info Alert Notices: Not Supported 00:12:18.566 EGE Aggregate Log Change Notices: Not Supported 00:12:18.566 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.566 Zone Descriptor Change Notices: Not Supported 00:12:18.566 Discovery Log Change Notices: Not Supported 00:12:18.566 Controller Attributes 00:12:18.566 128-bit Host Identifier: Not Supported 00:12:18.566 Non-Operational Permissive Mode: Not Supported 00:12:18.566 NVM Sets: Not Supported 00:12:18.566 Read Recovery Levels: Not Supported 00:12:18.566 Endurance Groups: Supported 00:12:18.566 Predictable Latency Mode: Not Supported 00:12:18.566 Traffic Based Keep Alive: Not Supported 00:12:18.566 Namespace Granularity: Not Supported 00:12:18.566 SQ Associations: Not Supported 00:12:18.566 UUID List: Not Supported 00:12:18.566 Multi-Domain Subsystem: Not Supported 00:12:18.566 Fixed Capacity Management: Not Supported 00:12:18.566 Variable Capacity Management: Not Supported 00:12:18.566 Delete Endurance Group: Not Supported 00:12:18.566 Delete NVM Set: Not Supported 00:12:18.566 Extended LBA Formats Supported: Supported 00:12:18.566 Flexible Data Placement Supported: Supported 00:12:18.566 00:12:18.566 Controller Memory Buffer Support 00:12:18.566 ================================ 00:12:18.566 Supported: No 00:12:18.566 00:12:18.566 Persistent Memory Region Support 00:12:18.566 ================================ 00:12:18.566 Supported: No 00:12:18.566 00:12:18.566 Admin Command Set Attributes 00:12:18.566 ============================ 00:12:18.566 Security Send/Receive: Not Supported 00:12:18.566 Format NVM: Supported 00:12:18.566 Firmware Activate/Download: Not Supported 00:12:18.566 Namespace Management: Supported 00:12:18.566 Device Self-Test: Not Supported 00:12:18.566 Directives: Supported 00:12:18.566 NVMe-MI: Not Supported 00:12:18.566 Virtualization Management: Not Supported 00:12:18.566 Doorbell Buffer Config: Supported 00:12:18.566 Get LBA Status Capability: Not Supported 00:12:18.566 Command & Feature Lockdown Capability: Not Supported 00:12:18.566 Abort Command Limit: 4 00:12:18.566 Async Event Request Limit: 4 00:12:18.566 Number of Firmware Slots: N/A 00:12:18.566 Firmware Slot 1 Read-Only: N/A 00:12:18.566 Firmware Activation Without Reset: N/A 00:12:18.566 Multiple Update Detection Support: N/A 00:12:18.566 Firmware Update Granularity: No Information Provided 00:12:18.566 Per-Namespace SMART Log: Yes 00:12:18.566 Asymmetric Namespace Access Log Page: Not Supported
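The identify dumps in this test (including the one continuing below for 0000:00:13.0) come from the two-line loop at nvme/nvme.sh@15-16 shown earlier in the log. A minimal sketch of that loop, assuming bdfs is an array of the PCIe addresses under test populated elsewhere in the harness:

# Sketch of the identify loop visible in the log; bdfs is assumed to be
# filled in earlier by the test scripts, the binary path is as printed above.
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0   # one full identify dump per controller
done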
00:12:18.567 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:18.567 Command Effects Log Page: Supported 00:12:18.567 Get Log Page Extended Data: Supported 00:12:18.567 Telemetry Log Pages: Not Supported 00:12:18.567 Persistent Event Log Pages: Not Supported 00:12:18.567 Supported Log Pages Log Page: May Support 00:12:18.567 Commands Supported & Effects Log Page: Not Supported 00:12:18.567 Feature Identifiers & Effects Log Page: May Support 00:12:18.567 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.567 Data Area 4 for Telemetry Log: Not Supported 00:12:18.567 Error Log Page Entries Supported: 1 00:12:18.567 Keep Alive: Not Supported 00:12:18.567 00:12:18.567 NVM Command Set Attributes 00:12:18.567 ========================== 00:12:18.567 Submission Queue Entry Size 00:12:18.567 Max: 64 00:12:18.567 Min: 64 00:12:18.567 Completion Queue Entry Size 00:12:18.567 Max: 16 00:12:18.567 Min: 16 00:12:18.567 Number of Namespaces: 256 00:12:18.567 Compare Command: Supported 00:12:18.567 Write Uncorrectable Command: Not Supported 00:12:18.567 Dataset Management Command: Supported 00:12:18.567 Write Zeroes Command: Supported 00:12:18.567 Set Features Save Field: Supported 00:12:18.567 Reservations: Not Supported 00:12:18.567 Timestamp: Supported 00:12:18.567 Copy: Supported 00:12:18.567 Volatile Write Cache: Present 00:12:18.567 Atomic Write Unit (Normal): 1 00:12:18.567 Atomic Write Unit (PFail): 1 00:12:18.567 Atomic Compare & Write Unit: 1 00:12:18.567 Fused Compare & Write: Not Supported 00:12:18.567 Scatter-Gather List 00:12:18.567 SGL Command Set: Supported 00:12:18.567 SGL Keyed: Not Supported 00:12:18.567 SGL Bit Bucket Descriptor: Not Supported 00:12:18.567 SGL Metadata Pointer: Not Supported 00:12:18.567 Oversized SGL: Not Supported 00:12:18.567 SGL Metadata Address: Not Supported 00:12:18.567 SGL Offset: Not Supported 00:12:18.567 Transport SGL Data Block: Not Supported 00:12:18.567 Replay Protected Memory Block: Not Supported 00:12:18.567 00:12:18.567 Firmware Slot Information 00:12:18.567 ========================= 00:12:18.567 Active slot: 1 00:12:18.567 Slot 1 Firmware Revision: 1.0 00:12:18.567 00:12:18.567 00:12:18.567 Commands Supported and Effects 00:12:18.567 ============================== 00:12:18.567 Admin Commands 00:12:18.567 -------------- 00:12:18.567 Delete I/O Submission Queue (00h): Supported 00:12:18.567 Create I/O Submission Queue (01h): Supported 00:12:18.567 Get Log Page (02h): Supported 00:12:18.567 Delete I/O Completion Queue (04h): Supported 00:12:18.567 Create I/O Completion Queue (05h): Supported 00:12:18.567 Identify (06h): Supported 00:12:18.567 Abort (08h): Supported 00:12:18.567 Set Features (09h): Supported 00:12:18.567 Get Features (0Ah): Supported 00:12:18.567 Asynchronous Event Request (0Ch): Supported 00:12:18.567 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:18.567 Directive Send (19h): Supported 00:12:18.567 Directive Receive (1Ah): Supported 00:12:18.567 Virtualization Management (1Ch): Supported 00:12:18.567 Doorbell Buffer Config (7Ch): Supported 00:12:18.567 Format NVM (80h): Supported LBA-Change 00:12:18.567 I/O Commands 00:12:18.567 ------------ 00:12:18.567 Flush (00h): Supported LBA-Change 00:12:18.567 Write (01h): Supported LBA-Change 00:12:18.567 Read (02h): Supported 00:12:18.567 Compare (05h): Supported 00:12:18.567 Write Zeroes (08h): Supported LBA-Change 00:12:18.567 Dataset Management (09h): Supported LBA-Change 00:12:18.567 Unknown (0Ch): Supported 00:12:18.567 Unknown (12h): Supported 00:12:18.567 Copy
(19h): Supported LBA-Change 00:12:18.567 Unknown (1Dh): Supported LBA-Change 00:12:18.567 00:12:18.567 Error Log 00:12:18.567 ========= 00:12:18.567 00:12:18.567 Arbitration 00:12:18.567 =========== 00:12:18.567 Arbitration Burst: no limit 00:12:18.567 00:12:18.567 Power Management 00:12:18.567 ================ 00:12:18.567 Number of Power States: 1 00:12:18.567 Current Power State: Power State #0 00:12:18.567 Power State #0: 00:12:18.567 Max Power: 25.00 W 00:12:18.567 Non-Operational State: Operational 00:12:18.567 Entry Latency: 16 microseconds 00:12:18.567 Exit Latency: 4 microseconds 00:12:18.567 Relative Read Throughput: 0 00:12:18.567 Relative Read Latency: 0 00:12:18.567 Relative Write Throughput: 0 00:12:18.567 Relative Write Latency: 0 00:12:18.567 Idle Power: Not Reported 00:12:18.567 Active Power: Not Reported 00:12:18.567 Non-Operational Permissive Mode: Not Supported 00:12:18.567 00:12:18.567 Health Information 00:12:18.567 ================== 00:12:18.567 Critical Warnings: 00:12:18.567 Available Spare Space: OK 00:12:18.567 Temperature: OK 00:12:18.567 Device Reliability: OK 00:12:18.567 Read Only: No 00:12:18.567 Volatile Memory Backup: OK 00:12:18.567 Current Temperature: 323 Kelvin (50 Celsius) 00:12:18.567 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:18.567 Available Spare: 0% 00:12:18.567 Available Spare Threshold: 0% 00:12:18.567 Life Percentage Used: 0% 00:12:18.567 Data Units Read: 883 00:12:18.567 Data Units Written: 812 00:12:18.567 Host Read Commands: 37939 00:12:18.567 Host Write Commands: 37362 00:12:18.567 Controller Busy Time: 0 minutes 00:12:18.567 Power Cycles: 0 00:12:18.567 Power On Hours: 0 hours 00:12:18.567 Unsafe Shutdowns: 0 00:12:18.567 Unrecoverable Media Errors: 0 00:12:18.567 Lifetime Error Log Entries: 0 00:12:18.567 Warning Temperature Time: 0 minutes 00:12:18.567 Critical Temperature Time: 0 minutes 00:12:18.567 00:12:18.567 Number of Queues 00:12:18.567 ================ 00:12:18.567 Number of I/O Submission Queues: 64 00:12:18.567 Number of I/O Completion Queues: 64 00:12:18.567 00:12:18.567 ZNS Specific Controller Data 00:12:18.567 ============================ 00:12:18.567 Zone Append Size Limit: 0 00:12:18.567 00:12:18.567 00:12:18.567 Active Namespaces 00:12:18.567 ================= 00:12:18.567 Namespace ID:1 00:12:18.567 Error Recovery Timeout: Unlimited 00:12:18.567 Command Set Identifier: NVM (00h) 00:12:18.567 Deallocate: Supported 00:12:18.567 Deallocated/Unwritten Error: Supported 00:12:18.567 Deallocated Read Value: All 0x00 00:12:18.567 Deallocate in Write Zeroes: Not Supported 00:12:18.567 Deallocated Guard Field: 0xFFFF 00:12:18.567 Flush: Supported 00:12:18.567 Reservation: Not Supported 00:12:18.567 Namespace Sharing Capabilities: Multiple Controllers 00:12:18.567 Size (in LBAs): 262144 (1GiB) 00:12:18.567 Capacity (in LBAs): 262144 (1GiB) 00:12:18.567 Utilization (in LBAs): 262144 (1GiB) 00:12:18.567 Thin Provisioning: Not Supported 00:12:18.567 Per-NS Atomic Units: No 00:12:18.567 Maximum Single Source Range Length: 128 00:12:18.567 Maximum Copy Length: 128 00:12:18.567 Maximum Source Range Count: 128 00:12:18.567 NGUID/EUI64 Never Reused: No 00:12:18.567 Namespace Write Protected: No 00:12:18.567 Endurance group ID: 1 00:12:18.567 Number of LBA Formats: 8 00:12:18.567 Current LBA Format: LBA Format #04 00:12:18.567 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.567 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:18.567 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:18.567 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:12:18.567 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:18.567 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:18.567 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:18.567 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:18.567 00:12:18.567 Get Feature FDP: 00:12:18.567 ================ 00:12:18.567 Enabled: Yes 00:12:18.567 FDP configuration index: 0 00:12:18.567 00:12:18.567 FDP configurations log page 00:12:18.567 =========================== 00:12:18.567 Number of FDP configurations: 1 00:12:18.567 Version: 0 00:12:18.567 Size: 112 00:12:18.567 FDP Configuration Descriptor: 0 00:12:18.567 Descriptor Size: 96 00:12:18.567 Reclaim Group Identifier format: 2 00:12:18.567 FDP Volatile Write Cache: Not Present 00:12:18.567 FDP Configuration: Valid 00:12:18.567 Vendor Specific Size: 0 00:12:18.567 Number of Reclaim Groups: 2 00:12:18.567 Number of Reclaim Unit Handles: 8 00:12:18.567 Max Placement Identifiers: 128 00:12:18.567 Number of Namespaces Supported: 256 00:12:18.567 Reclaim Unit Nominal Size: 6000000 bytes 00:12:18.567 Estimated Reclaim Unit Time Limit: Not Reported 00:12:18.567 RUH Desc #000: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #001: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #002: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #003: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #004: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #005: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #006: RUH Type: Initially Isolated 00:12:18.567 RUH Desc #007: RUH Type: Initially Isolated 00:12:18.567 00:12:18.567 FDP reclaim unit handle usage log page 00:12:18.568 ====================================== 00:12:18.568 Number of Reclaim Unit Handles: 8 00:12:18.568 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:18.568 RUH Usage Desc #001: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #002: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #003: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #004: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #005: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #006: RUH Attributes: Unused 00:12:18.568 RUH Usage Desc #007: RUH Attributes: Unused 00:12:18.568 00:12:18.568 FDP statistics log page 00:12:18.568 ======================= 00:12:18.568 Host bytes with metadata written: 520855552 00:12:18.568 Media bytes with metadata written: 520912896 00:12:18.568 Media bytes erased: 0 00:12:18.568 00:12:18.568 FDP events log page 00:12:18.568 =================== 00:12:18.568 Number of FDP events: 0 00:12:18.568 00:12:18.568 NVM Specific Namespace Data 00:12:18.568 =========================== 00:12:18.568 Logical Block Storage Tag Mask: 0 00:12:18.568 Protection Information Capabilities: 00:12:18.568 16b Guard Protection Information Storage Tag Support: No 00:12:18.568 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:18.568 Storage Tag Check Read Support: No 00:12:18.568 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:18.568 ************************************ 00:12:18.568 END TEST nvme_identify 00:12:18.568 ************************************ 00:12:18.568 00:12:18.568 real 0m1.737s 00:12:18.568 user 0m0.640s 00:12:18.568 sys 0m0.889s 00:12:18.568 10:17:25 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.568 10:17:25 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:18.568 10:17:25 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:18.568 10:17:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.568 10:17:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.568 10:17:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.568 ************************************ 00:12:18.568 START TEST nvme_perf 00:12:18.568 ************************************ 00:12:18.568 10:17:25 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:18.568 10:17:25 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:19.947 Initializing NVMe Controllers 00:12:19.947 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:19.947 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:19.947 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:19.947 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:19.947 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:19.947 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:19.947 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:19.947 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:19.947 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:19.947 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:19.947 Initialization complete. Launching workers. 
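The tables that follow are the output of the spdk_nvme_perf invocation above: a per-namespace summary (IOPS, throughput, average/min/max latency), summary latency percentiles, and per-bucket latency histograms. An annotated sketch of that command follows; the flag readings are assumptions based on SPDK's perf usage text, not confirmed by this log, so verify against spdk_nvme_perf --help for this build:

# Sketch of the perf run above, with assumed flag meanings:
#   -q 128    queue depth
#   -o 12288  I/O size in bytes (12 KiB)
#   -w read   100% read workload
#   -t 1      run time in seconds
#   -LL       latency tracking; the doubled L requests the detailed histograms
#   -i 0, -N  carried over from the log unchanged (shared-memory group id and
#             a harness flag; their meanings are not annotated here)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N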
00:12:19.947 ======================================================== 00:12:19.947 Latency(us) 00:12:19.947 Device Information : IOPS MiB/s Average min max 00:12:19.947 PCIE (0000:00:10.0) NSID 1 from core 0: 13148.05 154.08 9763.81 7941.19 51551.13 00:12:19.947 PCIE (0000:00:11.0) NSID 1 from core 0: 13148.05 154.08 9751.17 7487.06 49801.91 00:12:19.947 PCIE (0000:00:13.0) NSID 1 from core 0: 13148.05 154.08 9736.61 8040.16 48910.44 00:12:19.947 PCIE (0000:00:12.0) NSID 1 from core 0: 13148.05 154.08 9721.57 8067.33 46988.70 00:12:19.947 PCIE (0000:00:12.0) NSID 2 from core 0: 13148.05 154.08 9705.53 8041.96 45026.94 00:12:19.947 PCIE (0000:00:12.0) NSID 3 from core 0: 13148.05 154.08 9688.31 8035.73 43003.41 00:12:19.947 ======================================================== 00:12:19.947 Total : 78888.32 924.47 9727.83 7487.06 51551.13 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8159.100us 00:12:19.947 10.00000% : 8422.297us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9580.363us 00:12:19.947 90.00000% : 11580.659us 00:12:19.947 95.00000% : 12370.249us 00:12:19.947 98.00000% : 14528.463us 00:12:19.947 99.00000% : 15791.807us 00:12:19.947 99.50000% : 41690.371us 00:12:19.947 99.90000% : 51165.455us 00:12:19.947 99.99000% : 51586.570us 00:12:19.947 99.99900% : 51586.570us 00:12:19.947 99.99990% : 51586.570us 00:12:19.947 99.99999% : 51586.570us 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8211.740us 00:12:19.947 10.00000% : 8474.937us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9580.363us 00:12:19.947 90.00000% : 11528.019us 00:12:19.947 95.00000% : 12422.888us 00:12:19.947 98.00000% : 14317.905us 00:12:19.947 99.00000% : 16212.922us 00:12:19.947 99.50000% : 41479.814us 00:12:19.947 99.90000% : 49480.996us 00:12:19.947 99.99000% : 49902.111us 00:12:19.947 99.99900% : 49902.111us 00:12:19.947 99.99990% : 49902.111us 00:12:19.947 99.99999% : 49902.111us 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8211.740us 00:12:19.947 10.00000% : 8474.937us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9580.363us 00:12:19.947 90.00000% : 11528.019us 00:12:19.947 95.00000% : 12317.610us 00:12:19.947 98.00000% : 14212.627us 00:12:19.947 99.00000% : 16002.365us 00:12:19.947 99.50000% : 40427.027us 00:12:19.947 99.90000% : 48638.766us 00:12:19.947 99.99000% : 49059.881us 00:12:19.947 99.99900% : 49059.881us 00:12:19.947 99.99990% : 49059.881us 00:12:19.947 99.99999% : 49059.881us 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8211.740us 00:12:19.947 10.00000% : 8474.937us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9527.724us 00:12:19.947 90.00000% : 11580.659us 00:12:19.947 95.00000% : 12317.610us 00:12:19.947 98.00000% : 14317.905us 00:12:19.947 
99.00000% : 15791.807us 00:12:19.947 99.50000% : 38742.567us 00:12:19.947 99.90000% : 46743.749us 00:12:19.947 99.99000% : 47164.864us 00:12:19.947 99.99900% : 47164.864us 00:12:19.947 99.99990% : 47164.864us 00:12:19.947 99.99999% : 47164.864us 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8211.740us 00:12:19.947 10.00000% : 8474.937us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9527.724us 00:12:19.947 90.00000% : 11580.659us 00:12:19.947 95.00000% : 12317.610us 00:12:19.947 98.00000% : 14423.184us 00:12:19.947 99.00000% : 15475.971us 00:12:19.947 99.50000% : 36847.550us 00:12:19.947 99.90000% : 44638.175us 00:12:19.947 99.99000% : 45059.290us 00:12:19.947 99.99900% : 45059.290us 00:12:19.947 99.99990% : 45059.290us 00:12:19.947 99.99999% : 45059.290us 00:12:19.947 00:12:19.947 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:19.947 ================================================================================= 00:12:19.947 1.00000% : 8211.740us 00:12:19.947 10.00000% : 8474.937us 00:12:19.947 25.00000% : 8685.494us 00:12:19.947 50.00000% : 9001.330us 00:12:19.947 75.00000% : 9527.724us 00:12:19.947 90.00000% : 11580.659us 00:12:19.947 95.00000% : 12317.610us 00:12:19.947 98.00000% : 14423.184us 00:12:19.947 99.00000% : 15581.250us 00:12:19.947 99.50000% : 35163.091us 00:12:19.947 99.90000% : 42743.158us 00:12:19.947 99.99000% : 43164.273us 00:12:19.947 99.99900% : 43164.273us 00:12:19.947 99.99990% : 43164.273us 00:12:19.947 99.99999% : 43164.273us 00:12:19.947 00:12:19.947 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:19.947 ============================================================================== 00:12:19.947 Range in us Cumulative IO count 00:12:19.947 7895.904 - 7948.543: 0.0076% ( 1) 00:12:19.947 7948.543 - 8001.182: 0.0758% ( 9) 00:12:19.947 8001.182 - 8053.822: 0.2048% ( 17) 00:12:19.947 8053.822 - 8106.461: 0.6599% ( 60) 00:12:19.947 8106.461 - 8159.100: 1.5018% ( 111) 00:12:19.947 8159.100 - 8211.740: 2.7306% ( 162) 00:12:19.947 8211.740 - 8264.379: 4.5737% ( 243) 00:12:19.947 8264.379 - 8317.018: 6.5534% ( 261) 00:12:19.947 8317.018 - 8369.658: 9.1095% ( 337) 00:12:19.947 8369.658 - 8422.297: 11.8629% ( 363) 00:12:19.947 8422.297 - 8474.937: 14.9499% ( 407) 00:12:19.947 8474.937 - 8527.576: 17.9536% ( 396) 00:12:19.947 8527.576 - 8580.215: 21.2151% ( 430) 00:12:19.947 8580.215 - 8632.855: 24.7876% ( 471) 00:12:19.947 8632.855 - 8685.494: 28.2995% ( 463) 00:12:19.947 8685.494 - 8738.133: 31.8189% ( 464) 00:12:19.947 8738.133 - 8790.773: 35.5583% ( 493) 00:12:19.947 8790.773 - 8843.412: 39.3052% ( 494) 00:12:19.947 8843.412 - 8896.051: 43.2873% ( 525) 00:12:19.947 8896.051 - 8948.691: 47.0646% ( 498) 00:12:19.947 8948.691 - 9001.330: 50.9026% ( 506) 00:12:19.947 9001.330 - 9053.969: 54.8013% ( 514) 00:12:19.947 9053.969 - 9106.609: 58.3586% ( 469) 00:12:19.947 9106.609 - 9159.248: 61.5671% ( 423) 00:12:19.947 9159.248 - 9211.888: 64.2900% ( 359) 00:12:19.947 9211.888 - 9264.527: 66.7779% ( 328) 00:12:19.947 9264.527 - 9317.166: 68.9624% ( 288) 00:12:19.947 9317.166 - 9369.806: 70.6993% ( 229) 00:12:19.947 9369.806 - 9422.445: 72.2846% ( 209) 00:12:19.947 9422.445 - 9475.084: 73.5133% ( 162) 00:12:19.947 9475.084 - 9527.724: 74.6056% ( 144) 00:12:19.947 9527.724 - 9580.363: 75.5537% ( 125) 00:12:19.947 9580.363 - 
9633.002: 76.2894% ( 97) 00:12:19.947 9633.002 - 9685.642: 76.9569% ( 88) 00:12:19.947 9685.642 - 9738.281: 77.6092% ( 86) 00:12:19.947 9738.281 - 9790.920: 78.2464% ( 84) 00:12:19.947 9790.920 - 9843.560: 78.7773% ( 70) 00:12:19.947 9843.560 - 9896.199: 79.2855% ( 67) 00:12:19.947 9896.199 - 9948.839: 79.7937% ( 67) 00:12:19.947 9948.839 - 10001.478: 80.1881% ( 52) 00:12:19.947 10001.478 - 10054.117: 80.5294% ( 45) 00:12:19.947 10054.117 - 10106.757: 80.9163% ( 51) 00:12:19.947 10106.757 - 10159.396: 81.2955% ( 50) 00:12:19.947 10159.396 - 10212.035: 81.6141% ( 42) 00:12:19.947 10212.035 - 10264.675: 81.9099% ( 39) 00:12:19.947 10264.675 - 10317.314: 82.2285% ( 42) 00:12:19.948 10317.314 - 10369.953: 82.5546% ( 43) 00:12:19.948 10369.953 - 10422.593: 82.8656% ( 41) 00:12:19.948 10422.593 - 10475.232: 83.1538% ( 38) 00:12:19.948 10475.232 - 10527.871: 83.3814% ( 30) 00:12:19.948 10527.871 - 10580.511: 83.6317% ( 33) 00:12:19.948 10580.511 - 10633.150: 83.9123% ( 37) 00:12:19.948 10633.150 - 10685.790: 84.1778% ( 35) 00:12:19.948 10685.790 - 10738.429: 84.4736% ( 39) 00:12:19.948 10738.429 - 10791.068: 84.8225% ( 46) 00:12:19.948 10791.068 - 10843.708: 85.1259% ( 40) 00:12:19.948 10843.708 - 10896.347: 85.4293% ( 40) 00:12:19.948 10896.347 - 10948.986: 85.7100% ( 37) 00:12:19.948 10948.986 - 11001.626: 86.0892% ( 50) 00:12:19.948 11001.626 - 11054.265: 86.4381% ( 46) 00:12:19.948 11054.265 - 11106.904: 86.8174% ( 50) 00:12:19.948 11106.904 - 11159.544: 87.2421% ( 56) 00:12:19.948 11159.544 - 11212.183: 87.6138% ( 49) 00:12:19.948 11212.183 - 11264.822: 88.0006% ( 51) 00:12:19.948 11264.822 - 11317.462: 88.3950% ( 52) 00:12:19.948 11317.462 - 11370.101: 88.8501% ( 60) 00:12:19.948 11370.101 - 11422.741: 89.1990% ( 46) 00:12:19.948 11422.741 - 11475.380: 89.5707% ( 49) 00:12:19.948 11475.380 - 11528.019: 89.9651% ( 52) 00:12:19.948 11528.019 - 11580.659: 90.3216% ( 47) 00:12:19.948 11580.659 - 11633.298: 90.6629% ( 45) 00:12:19.948 11633.298 - 11685.937: 91.0573% ( 52) 00:12:19.948 11685.937 - 11738.577: 91.4518% ( 52) 00:12:19.948 11738.577 - 11791.216: 91.8613% ( 54) 00:12:19.948 11791.216 - 11843.855: 92.2103% ( 46) 00:12:19.948 11843.855 - 11896.495: 92.6047% ( 52) 00:12:19.948 11896.495 - 11949.134: 92.9839% ( 50) 00:12:19.948 11949.134 - 12001.773: 93.3025% ( 42) 00:12:19.948 12001.773 - 12054.413: 93.6438% ( 45) 00:12:19.948 12054.413 - 12107.052: 93.9093% ( 35) 00:12:19.948 12107.052 - 12159.692: 94.1823% ( 36) 00:12:19.948 12159.692 - 12212.331: 94.3947% ( 28) 00:12:19.948 12212.331 - 12264.970: 94.6526% ( 34) 00:12:19.948 12264.970 - 12317.610: 94.8650% ( 28) 00:12:19.948 12317.610 - 12370.249: 95.0850% ( 29) 00:12:19.948 12370.249 - 12422.888: 95.2063% ( 16) 00:12:19.948 12422.888 - 12475.528: 95.2897% ( 11) 00:12:19.948 12475.528 - 12528.167: 95.3959% ( 14) 00:12:19.948 12528.167 - 12580.806: 95.4870% ( 12) 00:12:19.948 12580.806 - 12633.446: 95.5780% ( 12) 00:12:19.948 12633.446 - 12686.085: 95.6690% ( 12) 00:12:19.948 12686.085 - 12738.724: 95.7524% ( 11) 00:12:19.948 12738.724 - 12791.364: 95.8434% ( 12) 00:12:19.948 12791.364 - 12844.003: 95.9496% ( 14) 00:12:19.948 12844.003 - 12896.643: 96.0255% ( 10) 00:12:19.948 12896.643 - 12949.282: 96.1165% ( 12) 00:12:19.948 12949.282 - 13001.921: 96.2151% ( 13) 00:12:19.948 13001.921 - 13054.561: 96.3061% ( 12) 00:12:19.948 13054.561 - 13107.200: 96.3668% ( 8) 00:12:19.948 13107.200 - 13159.839: 96.4502% ( 11) 00:12:19.948 13159.839 - 13212.479: 96.5109% ( 8) 00:12:19.948 13212.479 - 13265.118: 96.5868% ( 10) 00:12:19.948 13265.118 - 
13317.757: 96.6475% ( 8) 00:12:19.948 13317.757 - 13370.397: 96.7081% ( 8) 00:12:19.948 13370.397 - 13423.036: 96.7688% ( 8) 00:12:19.948 13423.036 - 13475.676: 96.8295% ( 8) 00:12:19.948 13475.676 - 13580.954: 96.9433% ( 15) 00:12:19.948 13580.954 - 13686.233: 97.0115% ( 9) 00:12:19.948 13686.233 - 13791.512: 97.1329% ( 16) 00:12:19.948 13791.512 - 13896.790: 97.2239% ( 12) 00:12:19.948 13896.790 - 14002.069: 97.3377% ( 15) 00:12:19.948 14002.069 - 14107.348: 97.5046% ( 22) 00:12:19.948 14107.348 - 14212.627: 97.6259% ( 16) 00:12:19.948 14212.627 - 14317.905: 97.7852% ( 21) 00:12:19.948 14317.905 - 14423.184: 97.9217% ( 18) 00:12:19.948 14423.184 - 14528.463: 98.0810% ( 21) 00:12:19.948 14528.463 - 14633.741: 98.2100% ( 17) 00:12:19.948 14633.741 - 14739.020: 98.3465% ( 18) 00:12:19.948 14739.020 - 14844.299: 98.4603% ( 15) 00:12:19.948 14844.299 - 14949.578: 98.5740% ( 15) 00:12:19.948 14949.578 - 15054.856: 98.6423% ( 9) 00:12:19.948 15054.856 - 15160.135: 98.7030% ( 8) 00:12:19.948 15160.135 - 15265.414: 98.7637% ( 8) 00:12:19.948 15265.414 - 15370.692: 98.8167% ( 7) 00:12:19.948 15370.692 - 15475.971: 98.8698% ( 7) 00:12:19.948 15475.971 - 15581.250: 98.9305% ( 8) 00:12:19.948 15581.250 - 15686.529: 98.9836% ( 7) 00:12:19.948 15686.529 - 15791.807: 99.0291% ( 6) 00:12:19.948 39374.239 - 39584.797: 99.0367% ( 1) 00:12:19.948 39584.797 - 39795.354: 99.0822% ( 6) 00:12:19.948 39795.354 - 40005.912: 99.1277% ( 6) 00:12:19.948 40005.912 - 40216.469: 99.1732% ( 6) 00:12:19.948 40216.469 - 40427.027: 99.2263% ( 7) 00:12:19.948 40427.027 - 40637.584: 99.2718% ( 6) 00:12:19.948 40637.584 - 40848.141: 99.3325% ( 8) 00:12:19.948 40848.141 - 41058.699: 99.3780% ( 6) 00:12:19.948 41058.699 - 41269.256: 99.4387% ( 8) 00:12:19.948 41269.256 - 41479.814: 99.4842% ( 6) 00:12:19.948 41479.814 - 41690.371: 99.5146% ( 4) 00:12:19.948 49059.881 - 49270.439: 99.5297% ( 2) 00:12:19.948 49270.439 - 49480.996: 99.5677% ( 5) 00:12:19.948 49480.996 - 49691.553: 99.6132% ( 6) 00:12:19.948 49691.553 - 49902.111: 99.6587% ( 6) 00:12:19.948 49902.111 - 50112.668: 99.7042% ( 6) 00:12:19.948 50112.668 - 50323.226: 99.7421% ( 5) 00:12:19.948 50323.226 - 50533.783: 99.7876% ( 6) 00:12:19.948 50533.783 - 50744.341: 99.8255% ( 5) 00:12:19.948 50744.341 - 50954.898: 99.8786% ( 7) 00:12:19.948 50954.898 - 51165.455: 99.9166% ( 5) 00:12:19.948 51165.455 - 51376.013: 99.9697% ( 7) 00:12:19.948 51376.013 - 51586.570: 100.0000% ( 4) 00:12:19.948 00:12:19.948 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:19.948 ============================================================================== 00:12:19.948 Range in us Cumulative IO count 00:12:19.948 7474.789 - 7527.428: 0.0455% ( 6) 00:12:19.948 7527.428 - 7580.067: 0.0607% ( 2) 00:12:19.948 7580.067 - 7632.707: 0.0834% ( 3) 00:12:19.948 7632.707 - 7685.346: 0.1138% ( 4) 00:12:19.948 7685.346 - 7737.986: 0.1593% ( 6) 00:12:19.948 7737.986 - 7790.625: 0.1972% ( 5) 00:12:19.948 7790.625 - 7843.264: 0.2275% ( 4) 00:12:19.948 7843.264 - 7895.904: 0.2503% ( 3) 00:12:19.948 7895.904 - 7948.543: 0.2806% ( 4) 00:12:19.948 7948.543 - 8001.182: 0.3034% ( 3) 00:12:19.948 8001.182 - 8053.822: 0.3717% ( 9) 00:12:19.948 8053.822 - 8106.461: 0.4854% ( 15) 00:12:19.948 8106.461 - 8159.100: 0.7433% ( 34) 00:12:19.948 8159.100 - 8211.740: 1.3805% ( 84) 00:12:19.948 8211.740 - 8264.379: 2.6699% ( 170) 00:12:19.948 8264.379 - 8317.018: 4.1945% ( 201) 00:12:19.948 8317.018 - 8369.658: 6.4624% ( 299) 00:12:19.948 8369.658 - 8422.297: 9.2309% ( 365) 00:12:19.948 8422.297 - 8474.937: 
12.3180% ( 407) 00:12:19.948 8474.937 - 8527.576: 15.8146% ( 461) 00:12:19.948 8527.576 - 8580.215: 19.4554% ( 480) 00:12:19.948 8580.215 - 8632.855: 23.3086% ( 508) 00:12:19.948 8632.855 - 8685.494: 27.1845% ( 511) 00:12:19.948 8685.494 - 8738.133: 31.2424% ( 535) 00:12:19.948 8738.133 - 8790.773: 35.4824% ( 559) 00:12:19.948 8790.773 - 8843.412: 39.8589% ( 577) 00:12:19.948 8843.412 - 8896.051: 44.1748% ( 569) 00:12:19.948 8896.051 - 8948.691: 48.6195% ( 586) 00:12:19.948 8948.691 - 9001.330: 52.9050% ( 565) 00:12:19.948 9001.330 - 9053.969: 56.7885% ( 512) 00:12:19.948 9053.969 - 9106.609: 60.2776% ( 460) 00:12:19.948 9106.609 - 9159.248: 63.3950% ( 411) 00:12:19.948 9159.248 - 9211.888: 66.1408% ( 362) 00:12:19.948 9211.888 - 9264.527: 68.3783% ( 295) 00:12:19.948 9264.527 - 9317.166: 70.1608% ( 235) 00:12:19.948 9317.166 - 9369.806: 71.6247% ( 193) 00:12:19.948 9369.806 - 9422.445: 72.8459% ( 161) 00:12:19.948 9422.445 - 9475.084: 73.9912% ( 151) 00:12:19.948 9475.084 - 9527.724: 74.9317% ( 124) 00:12:19.948 9527.724 - 9580.363: 75.7737% ( 111) 00:12:19.948 9580.363 - 9633.002: 76.5701% ( 105) 00:12:19.948 9633.002 - 9685.642: 77.3058% ( 97) 00:12:19.948 9685.642 - 9738.281: 78.0036% ( 92) 00:12:19.948 9738.281 - 9790.920: 78.6029% ( 79) 00:12:19.948 9790.920 - 9843.560: 79.1110% ( 67) 00:12:19.948 9843.560 - 9896.199: 79.5358% ( 56) 00:12:19.948 9896.199 - 9948.839: 79.9150% ( 50) 00:12:19.948 9948.839 - 10001.478: 80.3322% ( 55) 00:12:19.948 10001.478 - 10054.117: 80.7721% ( 58) 00:12:19.948 10054.117 - 10106.757: 81.1514% ( 50) 00:12:19.948 10106.757 - 10159.396: 81.5079% ( 47) 00:12:19.948 10159.396 - 10212.035: 81.8265% ( 42) 00:12:19.948 10212.035 - 10264.675: 82.1147% ( 38) 00:12:19.949 10264.675 - 10317.314: 82.4408% ( 43) 00:12:19.949 10317.314 - 10369.953: 82.7594% ( 42) 00:12:19.949 10369.953 - 10422.593: 83.0780% ( 42) 00:12:19.949 10422.593 - 10475.232: 83.3131% ( 31) 00:12:19.949 10475.232 - 10527.871: 83.5103% ( 26) 00:12:19.949 10527.871 - 10580.511: 83.7227% ( 28) 00:12:19.949 10580.511 - 10633.150: 83.9427% ( 29) 00:12:19.949 10633.150 - 10685.790: 84.1399% ( 26) 00:12:19.949 10685.790 - 10738.429: 84.3522% ( 28) 00:12:19.949 10738.429 - 10791.068: 84.5722% ( 29) 00:12:19.949 10791.068 - 10843.708: 84.8453% ( 36) 00:12:19.949 10843.708 - 10896.347: 85.1562% ( 41) 00:12:19.949 10896.347 - 10948.986: 85.4976% ( 45) 00:12:19.949 10948.986 - 11001.626: 85.8465% ( 46) 00:12:19.949 11001.626 - 11054.265: 86.2333% ( 51) 00:12:19.949 11054.265 - 11106.904: 86.6353% ( 53) 00:12:19.949 11106.904 - 11159.544: 86.9918% ( 47) 00:12:19.949 11159.544 - 11212.183: 87.3711% ( 50) 00:12:19.949 11212.183 - 11264.822: 87.7958% ( 56) 00:12:19.949 11264.822 - 11317.462: 88.2964% ( 66) 00:12:19.949 11317.462 - 11370.101: 88.7515% ( 60) 00:12:19.949 11370.101 - 11422.741: 89.1990% ( 59) 00:12:19.949 11422.741 - 11475.380: 89.6845% ( 64) 00:12:19.949 11475.380 - 11528.019: 90.0941% ( 54) 00:12:19.949 11528.019 - 11580.659: 90.5188% ( 56) 00:12:19.949 11580.659 - 11633.298: 90.9436% ( 56) 00:12:19.949 11633.298 - 11685.937: 91.3683% ( 56) 00:12:19.949 11685.937 - 11738.577: 91.7627% ( 52) 00:12:19.949 11738.577 - 11791.216: 92.1572% ( 52) 00:12:19.949 11791.216 - 11843.855: 92.5137% ( 47) 00:12:19.949 11843.855 - 11896.495: 92.8246% ( 41) 00:12:19.949 11896.495 - 11949.134: 93.1660% ( 45) 00:12:19.949 11949.134 - 12001.773: 93.4769% ( 41) 00:12:19.949 12001.773 - 12054.413: 93.8107% ( 44) 00:12:19.949 12054.413 - 12107.052: 94.0610% ( 33) 00:12:19.949 12107.052 - 12159.692: 94.2658% ( 27) 
00:12:19.949 12159.692 - 12212.331: 94.4857% ( 29) 00:12:19.949 12212.331 - 12264.970: 94.6602% ( 23) 00:12:19.949 12264.970 - 12317.610: 94.8195% ( 21) 00:12:19.949 12317.610 - 12370.249: 94.9408% ( 16) 00:12:19.949 12370.249 - 12422.888: 95.0622% ( 16) 00:12:19.949 12422.888 - 12475.528: 95.1684% ( 14) 00:12:19.949 12475.528 - 12528.167: 95.3049% ( 18) 00:12:19.949 12528.167 - 12580.806: 95.4339% ( 17) 00:12:19.949 12580.806 - 12633.446: 95.5552% ( 16) 00:12:19.949 12633.446 - 12686.085: 95.6766% ( 16) 00:12:19.949 12686.085 - 12738.724: 95.7904% ( 15) 00:12:19.949 12738.724 - 12791.364: 95.9041% ( 15) 00:12:19.949 12791.364 - 12844.003: 95.9951% ( 12) 00:12:19.949 12844.003 - 12896.643: 96.1013% ( 14) 00:12:19.949 12896.643 - 12949.282: 96.1924% ( 12) 00:12:19.949 12949.282 - 13001.921: 96.2758% ( 11) 00:12:19.949 13001.921 - 13054.561: 96.3365% ( 8) 00:12:19.949 13054.561 - 13107.200: 96.4199% ( 11) 00:12:19.949 13107.200 - 13159.839: 96.4806% ( 8) 00:12:19.949 13159.839 - 13212.479: 96.5564% ( 10) 00:12:19.949 13212.479 - 13265.118: 96.6323% ( 10) 00:12:19.949 13265.118 - 13317.757: 96.6930% ( 8) 00:12:19.949 13317.757 - 13370.397: 96.7612% ( 9) 00:12:19.949 13370.397 - 13423.036: 96.8371% ( 10) 00:12:19.949 13423.036 - 13475.676: 96.9129% ( 10) 00:12:19.949 13475.676 - 13580.954: 97.0570% ( 19) 00:12:19.949 13580.954 - 13686.233: 97.2087% ( 20) 00:12:19.949 13686.233 - 13791.512: 97.3832% ( 23) 00:12:19.949 13791.512 - 13896.790: 97.5501% ( 22) 00:12:19.949 13896.790 - 14002.069: 97.6638% ( 15) 00:12:19.949 14002.069 - 14107.348: 97.7928% ( 17) 00:12:19.949 14107.348 - 14212.627: 97.9293% ( 18) 00:12:19.949 14212.627 - 14317.905: 98.0583% ( 17) 00:12:19.949 14317.905 - 14423.184: 98.1417% ( 11) 00:12:19.949 14423.184 - 14528.463: 98.2175% ( 10) 00:12:19.949 14528.463 - 14633.741: 98.2782% ( 8) 00:12:19.949 14633.741 - 14739.020: 98.3465% ( 9) 00:12:19.949 14739.020 - 14844.299: 98.4603% ( 15) 00:12:19.949 14844.299 - 14949.578: 98.5361% ( 10) 00:12:19.949 14949.578 - 15054.856: 98.5968% ( 8) 00:12:19.949 15054.856 - 15160.135: 98.6423% ( 6) 00:12:19.949 15160.135 - 15265.414: 98.7106% ( 9) 00:12:19.949 15265.414 - 15370.692: 98.7485% ( 5) 00:12:19.949 15370.692 - 15475.971: 98.7788% ( 4) 00:12:19.949 15475.971 - 15581.250: 98.8167% ( 5) 00:12:19.949 15581.250 - 15686.529: 98.8471% ( 4) 00:12:19.949 15686.529 - 15791.807: 98.8774% ( 4) 00:12:19.949 15791.807 - 15897.086: 98.9078% ( 4) 00:12:19.949 15897.086 - 16002.365: 98.9457% ( 5) 00:12:19.949 16002.365 - 16107.643: 98.9760% ( 4) 00:12:19.949 16107.643 - 16212.922: 99.0140% ( 5) 00:12:19.949 16212.922 - 16318.201: 99.0291% ( 2) 00:12:19.949 39163.682 - 39374.239: 99.0746% ( 6) 00:12:19.949 39374.239 - 39584.797: 99.1201% ( 6) 00:12:19.949 39584.797 - 39795.354: 99.1657% ( 6) 00:12:19.949 39795.354 - 40005.912: 99.2112% ( 6) 00:12:19.949 40005.912 - 40216.469: 99.2567% ( 6) 00:12:19.949 40216.469 - 40427.027: 99.3098% ( 7) 00:12:19.949 40427.027 - 40637.584: 99.3553% ( 6) 00:12:19.949 40637.584 - 40848.141: 99.4008% ( 6) 00:12:19.949 40848.141 - 41058.699: 99.4463% ( 6) 00:12:19.949 41058.699 - 41269.256: 99.4994% ( 7) 00:12:19.949 41269.256 - 41479.814: 99.5146% ( 2) 00:12:19.949 47585.979 - 47796.537: 99.5601% ( 6) 00:12:19.949 47796.537 - 48007.094: 99.6056% ( 6) 00:12:19.949 48007.094 - 48217.651: 99.6435% ( 5) 00:12:19.949 48217.651 - 48428.209: 99.6890% ( 6) 00:12:19.949 48428.209 - 48638.766: 99.7345% ( 6) 00:12:19.949 48638.766 - 48849.324: 99.7800% ( 6) 00:12:19.949 48849.324 - 49059.881: 99.8331% ( 7) 00:12:19.949 
49059.881 - 49270.439: 99.8786% ( 6) 00:12:19.949 49270.439 - 49480.996: 99.9166% ( 5) 00:12:19.949 49480.996 - 49691.553: 99.9697% ( 7) 00:12:19.949 49691.553 - 49902.111: 100.0000% ( 4) 00:12:19.949 00:12:19.949 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:19.949 ============================================================================== 00:12:19.949 Range in us Cumulative IO count 00:12:19.949 8001.182 - 8053.822: 0.0076% ( 1) 00:12:19.949 8053.822 - 8106.461: 0.1214% ( 15) 00:12:19.949 8106.461 - 8159.100: 0.4475% ( 43) 00:12:19.949 8159.100 - 8211.740: 1.3046% ( 113) 00:12:19.949 8211.740 - 8264.379: 2.6016% ( 171) 00:12:19.949 8264.379 - 8317.018: 4.2400% ( 216) 00:12:19.949 8317.018 - 8369.658: 6.3562% ( 279) 00:12:19.949 8369.658 - 8422.297: 9.0488% ( 355) 00:12:19.949 8422.297 - 8474.937: 12.1663% ( 411) 00:12:19.949 8474.937 - 8527.576: 15.6478% ( 459) 00:12:19.949 8527.576 - 8580.215: 19.4023% ( 495) 00:12:19.949 8580.215 - 8632.855: 23.2479% ( 507) 00:12:19.949 8632.855 - 8685.494: 27.1769% ( 518) 00:12:19.949 8685.494 - 8738.133: 31.3183% ( 546) 00:12:19.949 8738.133 - 8790.773: 35.5507% ( 558) 00:12:19.949 8790.773 - 8843.412: 39.8210% ( 563) 00:12:19.949 8843.412 - 8896.051: 44.2582% ( 585) 00:12:19.949 8896.051 - 8948.691: 48.6726% ( 582) 00:12:19.949 8948.691 - 9001.330: 52.9581% ( 565) 00:12:19.949 9001.330 - 9053.969: 56.7582% ( 501) 00:12:19.949 9053.969 - 9106.609: 60.2852% ( 465) 00:12:19.949 9106.609 - 9159.248: 63.3268% ( 401) 00:12:19.949 9159.248 - 9211.888: 66.0346% ( 357) 00:12:19.949 9211.888 - 9264.527: 68.2342% ( 290) 00:12:19.949 9264.527 - 9317.166: 70.0546% ( 240) 00:12:19.949 9317.166 - 9369.806: 71.5261% ( 194) 00:12:19.949 9369.806 - 9422.445: 72.7549% ( 162) 00:12:19.949 9422.445 - 9475.084: 73.8926% ( 150) 00:12:19.949 9475.084 - 9527.724: 74.8635% ( 128) 00:12:19.949 9527.724 - 9580.363: 75.7812% ( 121) 00:12:19.949 9580.363 - 9633.002: 76.5777% ( 105) 00:12:19.949 9633.002 - 9685.642: 77.2831% ( 93) 00:12:19.949 9685.642 - 9738.281: 77.8444% ( 74) 00:12:19.949 9738.281 - 9790.920: 78.3829% ( 71) 00:12:19.949 9790.920 - 9843.560: 78.8759% ( 65) 00:12:19.949 9843.560 - 9896.199: 79.3538% ( 63) 00:12:19.949 9896.199 - 9948.839: 79.7861% ( 57) 00:12:19.949 9948.839 - 10001.478: 80.2109% ( 56) 00:12:19.949 10001.478 - 10054.117: 80.5901% ( 50) 00:12:19.949 10054.117 - 10106.757: 81.0149% ( 56) 00:12:19.949 10106.757 - 10159.396: 81.3865% ( 49) 00:12:19.949 10159.396 - 10212.035: 81.7279% ( 45) 00:12:19.949 10212.035 - 10264.675: 82.1223% ( 52) 00:12:19.949 10264.675 - 10317.314: 82.4788% ( 47) 00:12:19.949 10317.314 - 10369.953: 82.8201% ( 45) 00:12:19.949 10369.953 - 10422.593: 83.0856% ( 35) 00:12:19.949 10422.593 - 10475.232: 83.3283% ( 32) 00:12:19.949 10475.232 - 10527.871: 83.5027% ( 23) 00:12:19.949 10527.871 - 10580.511: 83.6848% ( 24) 00:12:19.949 10580.511 - 10633.150: 83.8820% ( 26) 00:12:19.949 10633.150 - 10685.790: 84.0716% ( 25) 00:12:19.949 10685.790 - 10738.429: 84.2840% ( 28) 00:12:19.949 10738.429 - 10791.068: 84.5191% ( 31) 00:12:19.949 10791.068 - 10843.708: 84.7770% ( 34) 00:12:19.949 10843.708 - 10896.347: 85.0804% ( 40) 00:12:19.949 10896.347 - 10948.986: 85.3610% ( 37) 00:12:19.949 10948.986 - 11001.626: 85.6872% ( 43) 00:12:19.949 11001.626 - 11054.265: 86.0361% ( 46) 00:12:19.949 11054.265 - 11106.904: 86.4305% ( 52) 00:12:19.949 11106.904 - 11159.544: 86.9084% ( 63) 00:12:19.949 11159.544 - 11212.183: 87.3711% ( 61) 00:12:19.949 11212.183 - 11264.822: 87.8186% ( 59) 00:12:19.949 11264.822 - 11317.462: 
88.3040% ( 64) 00:12:19.949 11317.462 - 11370.101: 88.7515% ( 59) 00:12:19.949 11370.101 - 11422.741: 89.2597% ( 67) 00:12:19.949 11422.741 - 11475.380: 89.7451% ( 64) 00:12:19.949 11475.380 - 11528.019: 90.1699% ( 56) 00:12:19.949 11528.019 - 11580.659: 90.5947% ( 56) 00:12:19.949 11580.659 - 11633.298: 91.0042% ( 54) 00:12:19.949 11633.298 - 11685.937: 91.4669% ( 61) 00:12:19.949 11685.937 - 11738.577: 91.8765% ( 54) 00:12:19.949 11738.577 - 11791.216: 92.3089% ( 57) 00:12:19.949 11791.216 - 11843.855: 92.7033% ( 52) 00:12:19.949 11843.855 - 11896.495: 93.0370% ( 44) 00:12:19.949 11896.495 - 11949.134: 93.4314% ( 52) 00:12:19.950 11949.134 - 12001.773: 93.7955% ( 48) 00:12:19.950 12001.773 - 12054.413: 94.0762% ( 37) 00:12:19.950 12054.413 - 12107.052: 94.3340% ( 34) 00:12:19.950 12107.052 - 12159.692: 94.5692% ( 31) 00:12:19.950 12159.692 - 12212.331: 94.7512% ( 24) 00:12:19.950 12212.331 - 12264.970: 94.9105% ( 21) 00:12:19.950 12264.970 - 12317.610: 95.0622% ( 20) 00:12:19.950 12317.610 - 12370.249: 95.2367% ( 23) 00:12:19.950 12370.249 - 12422.888: 95.3732% ( 18) 00:12:19.950 12422.888 - 12475.528: 95.5021% ( 17) 00:12:19.950 12475.528 - 12528.167: 95.6235% ( 16) 00:12:19.950 12528.167 - 12580.806: 95.7448% ( 16) 00:12:19.950 12580.806 - 12633.446: 95.8434% ( 13) 00:12:19.950 12633.446 - 12686.085: 95.9269% ( 11) 00:12:19.950 12686.085 - 12738.724: 96.0179% ( 12) 00:12:19.950 12738.724 - 12791.364: 96.1317% ( 15) 00:12:19.950 12791.364 - 12844.003: 96.2303% ( 13) 00:12:19.950 12844.003 - 12896.643: 96.3061% ( 10) 00:12:19.950 12896.643 - 12949.282: 96.3668% ( 8) 00:12:19.950 12949.282 - 13001.921: 96.4578% ( 12) 00:12:19.950 13001.921 - 13054.561: 96.5185% ( 8) 00:12:19.950 13054.561 - 13107.200: 96.5944% ( 10) 00:12:19.950 13107.200 - 13159.839: 96.6930% ( 13) 00:12:19.950 13159.839 - 13212.479: 96.7840% ( 12) 00:12:19.950 13212.479 - 13265.118: 96.8750% ( 12) 00:12:19.950 13265.118 - 13317.757: 96.9584% ( 11) 00:12:19.950 13317.757 - 13370.397: 97.0495% ( 12) 00:12:19.950 13370.397 - 13423.036: 97.1405% ( 12) 00:12:19.950 13423.036 - 13475.676: 97.2163% ( 10) 00:12:19.950 13475.676 - 13580.954: 97.4059% ( 25) 00:12:19.950 13580.954 - 13686.233: 97.5576% ( 20) 00:12:19.950 13686.233 - 13791.512: 97.6562% ( 13) 00:12:19.950 13791.512 - 13896.790: 97.7549% ( 13) 00:12:19.950 13896.790 - 14002.069: 97.8535% ( 13) 00:12:19.950 14002.069 - 14107.348: 97.9672% ( 15) 00:12:19.950 14107.348 - 14212.627: 98.0886% ( 16) 00:12:19.950 14212.627 - 14317.905: 98.1796% ( 12) 00:12:19.950 14317.905 - 14423.184: 98.2782% ( 13) 00:12:19.950 14423.184 - 14528.463: 98.3844% ( 14) 00:12:19.950 14528.463 - 14633.741: 98.4678% ( 11) 00:12:19.950 14633.741 - 14739.020: 98.5664% ( 13) 00:12:19.950 14739.020 - 14844.299: 98.6423% ( 10) 00:12:19.950 14844.299 - 14949.578: 98.7030% ( 8) 00:12:19.950 14949.578 - 15054.856: 98.7712% ( 9) 00:12:19.950 15054.856 - 15160.135: 98.8319% ( 8) 00:12:19.950 15160.135 - 15265.414: 98.8547% ( 3) 00:12:19.950 15265.414 - 15370.692: 98.8774% ( 3) 00:12:19.950 15370.692 - 15475.971: 98.9002% ( 3) 00:12:19.950 15475.971 - 15581.250: 98.9229% ( 3) 00:12:19.950 15581.250 - 15686.529: 98.9457% ( 3) 00:12:19.950 15686.529 - 15791.807: 98.9684% ( 3) 00:12:19.950 15791.807 - 15897.086: 98.9912% ( 3) 00:12:19.950 15897.086 - 16002.365: 99.0064% ( 2) 00:12:19.950 16002.365 - 16107.643: 99.0215% ( 2) 00:12:19.950 16107.643 - 16212.922: 99.0291% ( 1) 00:12:19.950 38110.895 - 38321.452: 99.0595% ( 4) 00:12:19.950 38321.452 - 38532.010: 99.1050% ( 6) 00:12:19.950 38532.010 - 38742.567: 
99.1429% ( 5) 00:12:19.950 38742.567 - 38953.124: 99.1884% ( 6) 00:12:19.950 38953.124 - 39163.682: 99.2339% ( 6) 00:12:19.950 39163.682 - 39374.239: 99.2794% ( 6) 00:12:19.950 39374.239 - 39584.797: 99.3249% ( 6) 00:12:19.950 39584.797 - 39795.354: 99.3704% ( 6) 00:12:19.950 39795.354 - 40005.912: 99.4160% ( 6) 00:12:19.950 40005.912 - 40216.469: 99.4691% ( 7) 00:12:19.950 40216.469 - 40427.027: 99.5146% ( 6) 00:12:19.950 46533.192 - 46743.749: 99.5221% ( 1) 00:12:19.950 46743.749 - 46954.307: 99.5677% ( 6) 00:12:19.950 46954.307 - 47164.864: 99.6208% ( 7) 00:12:19.950 47164.864 - 47375.422: 99.6587% ( 5) 00:12:19.950 47375.422 - 47585.979: 99.7042% ( 6) 00:12:19.950 47585.979 - 47796.537: 99.7497% ( 6) 00:12:19.950 47796.537 - 48007.094: 99.8028% ( 7) 00:12:19.950 48007.094 - 48217.651: 99.8407% ( 5) 00:12:19.950 48217.651 - 48428.209: 99.8938% ( 7) 00:12:19.950 48428.209 - 48638.766: 99.9317% ( 5) 00:12:19.950 48638.766 - 48849.324: 99.9848% ( 7) 00:12:19.950 48849.324 - 49059.881: 100.0000% ( 2) 00:12:19.950 00:12:19.950 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:19.950 ============================================================================== 00:12:19.950 Range in us Cumulative IO count 00:12:19.950 8053.822 - 8106.461: 0.0303% ( 4) 00:12:19.950 8106.461 - 8159.100: 0.3641% ( 44) 00:12:19.950 8159.100 - 8211.740: 1.1529% ( 104) 00:12:19.950 8211.740 - 8264.379: 2.3741% ( 161) 00:12:19.950 8264.379 - 8317.018: 3.9518% ( 208) 00:12:19.950 8317.018 - 8369.658: 6.3638% ( 318) 00:12:19.950 8369.658 - 8422.297: 9.0337% ( 352) 00:12:19.950 8422.297 - 8474.937: 12.1359% ( 409) 00:12:19.950 8474.937 - 8527.576: 15.5340% ( 448) 00:12:19.950 8527.576 - 8580.215: 19.3416% ( 502) 00:12:19.950 8580.215 - 8632.855: 23.2555% ( 516) 00:12:19.950 8632.855 - 8685.494: 27.1541% ( 514) 00:12:19.950 8685.494 - 8738.133: 31.3258% ( 550) 00:12:19.950 8738.133 - 8790.773: 35.5583% ( 558) 00:12:19.950 8790.773 - 8843.412: 39.9272% ( 576) 00:12:19.950 8843.412 - 8896.051: 44.4175% ( 592) 00:12:19.950 8896.051 - 8948.691: 48.9381% ( 596) 00:12:19.950 8948.691 - 9001.330: 53.2160% ( 564) 00:12:19.950 9001.330 - 9053.969: 57.0768% ( 509) 00:12:19.950 9053.969 - 9106.609: 60.5658% ( 460) 00:12:19.950 9106.609 - 9159.248: 63.7439% ( 419) 00:12:19.950 9159.248 - 9211.888: 66.3607% ( 345) 00:12:19.950 9211.888 - 9264.527: 68.5073% ( 283) 00:12:19.950 9264.527 - 9317.166: 70.3883% ( 248) 00:12:19.950 9317.166 - 9369.806: 71.8750% ( 196) 00:12:19.950 9369.806 - 9422.445: 73.1189% ( 164) 00:12:19.950 9422.445 - 9475.084: 74.1657% ( 138) 00:12:19.950 9475.084 - 9527.724: 75.1062% ( 124) 00:12:19.950 9527.724 - 9580.363: 75.9026% ( 105) 00:12:19.950 9580.363 - 9633.002: 76.6080% ( 93) 00:12:19.950 9633.002 - 9685.642: 77.1769% ( 75) 00:12:19.950 9685.642 - 9738.281: 77.7837% ( 80) 00:12:19.950 9738.281 - 9790.920: 78.3146% ( 70) 00:12:19.950 9790.920 - 9843.560: 78.8076% ( 65) 00:12:19.950 9843.560 - 9896.199: 79.2931% ( 64) 00:12:19.950 9896.199 - 9948.839: 79.7482% ( 60) 00:12:19.950 9948.839 - 10001.478: 80.1805% ( 57) 00:12:19.950 10001.478 - 10054.117: 80.5901% ( 54) 00:12:19.950 10054.117 - 10106.757: 80.9845% ( 52) 00:12:19.950 10106.757 - 10159.396: 81.3410% ( 47) 00:12:19.950 10159.396 - 10212.035: 81.6975% ( 47) 00:12:19.950 10212.035 - 10264.675: 82.0009% ( 40) 00:12:19.950 10264.675 - 10317.314: 82.3271% ( 43) 00:12:19.950 10317.314 - 10369.953: 82.6380% ( 41) 00:12:19.950 10369.953 - 10422.593: 82.9263% ( 38) 00:12:19.950 10422.593 - 10475.232: 83.1690% ( 32) 00:12:19.950 10475.232 
- 10527.871: 83.4421% ( 36) 00:12:19.950 10527.871 - 10580.511: 83.6468% ( 27) 00:12:19.950 10580.511 - 10633.150: 83.8744% ( 30) 00:12:19.950 10633.150 - 10685.790: 84.1475% ( 36) 00:12:19.950 10685.790 - 10738.429: 84.4660% ( 42) 00:12:19.950 10738.429 - 10791.068: 84.7239% ( 34) 00:12:19.950 10791.068 - 10843.708: 84.9059% ( 24) 00:12:19.950 10843.708 - 10896.347: 85.1183% ( 28) 00:12:19.950 10896.347 - 10948.986: 85.3610% ( 32) 00:12:19.950 10948.986 - 11001.626: 85.6493% ( 38) 00:12:19.950 11001.626 - 11054.265: 86.0209% ( 49) 00:12:19.950 11054.265 - 11106.904: 86.3850% ( 48) 00:12:19.950 11106.904 - 11159.544: 86.7946% ( 54) 00:12:19.950 11159.544 - 11212.183: 87.2118% ( 55) 00:12:19.950 11212.183 - 11264.822: 87.6365% ( 56) 00:12:19.950 11264.822 - 11317.462: 88.0916% ( 60) 00:12:19.950 11317.462 - 11370.101: 88.5619% ( 62) 00:12:19.950 11370.101 - 11422.741: 89.0473% ( 64) 00:12:19.950 11422.741 - 11475.380: 89.5024% ( 60) 00:12:19.950 11475.380 - 11528.019: 89.9424% ( 58) 00:12:19.950 11528.019 - 11580.659: 90.3899% ( 59) 00:12:19.950 11580.659 - 11633.298: 90.8298% ( 58) 00:12:19.950 11633.298 - 11685.937: 91.2166% ( 51) 00:12:19.950 11685.937 - 11738.577: 91.6338% ( 55) 00:12:19.950 11738.577 - 11791.216: 92.0206% ( 51) 00:12:19.950 11791.216 - 11843.855: 92.4302% ( 54) 00:12:19.950 11843.855 - 11896.495: 92.8626% ( 57) 00:12:19.950 11896.495 - 11949.134: 93.2115% ( 46) 00:12:19.950 11949.134 - 12001.773: 93.5680% ( 47) 00:12:19.950 12001.773 - 12054.413: 93.8941% ( 43) 00:12:19.950 12054.413 - 12107.052: 94.1748% ( 37) 00:12:19.950 12107.052 - 12159.692: 94.4478% ( 36) 00:12:19.950 12159.692 - 12212.331: 94.6829% ( 31) 00:12:19.950 12212.331 - 12264.970: 94.8802% ( 26) 00:12:19.950 12264.970 - 12317.610: 95.0470% ( 22) 00:12:19.950 12317.610 - 12370.249: 95.1987% ( 20) 00:12:19.950 12370.249 - 12422.888: 95.3201% ( 16) 00:12:19.950 12422.888 - 12475.528: 95.4414% ( 16) 00:12:19.950 12475.528 - 12528.167: 95.5552% ( 15) 00:12:19.950 12528.167 - 12580.806: 95.6917% ( 18) 00:12:19.950 12580.806 - 12633.446: 95.8207% ( 17) 00:12:19.950 12633.446 - 12686.085: 95.9421% ( 16) 00:12:19.950 12686.085 - 12738.724: 96.0255% ( 11) 00:12:19.950 12738.724 - 12791.364: 96.1393% ( 15) 00:12:19.950 12791.364 - 12844.003: 96.2151% ( 10) 00:12:19.950 12844.003 - 12896.643: 96.2834% ( 9) 00:12:19.950 12896.643 - 12949.282: 96.3592% ( 10) 00:12:19.950 12949.282 - 13001.921: 96.4351% ( 10) 00:12:19.950 13001.921 - 13054.561: 96.5033% ( 9) 00:12:19.950 13054.561 - 13107.200: 96.5868% ( 11) 00:12:19.950 13107.200 - 13159.839: 96.6550% ( 9) 00:12:19.950 13159.839 - 13212.479: 96.7385% ( 11) 00:12:19.950 13212.479 - 13265.118: 96.8219% ( 11) 00:12:19.950 13265.118 - 13317.757: 96.8902% ( 9) 00:12:19.950 13317.757 - 13370.397: 96.9584% ( 9) 00:12:19.950 13370.397 - 13423.036: 97.0191% ( 8) 00:12:19.950 13423.036 - 13475.676: 97.0950% ( 10) 00:12:19.950 13475.676 - 13580.954: 97.2618% ( 22) 00:12:19.951 13580.954 - 13686.233: 97.4135% ( 20) 00:12:19.951 13686.233 - 13791.512: 97.5425% ( 17) 00:12:19.951 13791.512 - 13896.790: 97.6562% ( 15) 00:12:19.951 13896.790 - 14002.069: 97.7549% ( 13) 00:12:19.951 14002.069 - 14107.348: 97.8838% ( 17) 00:12:19.951 14107.348 - 14212.627: 97.9900% ( 14) 00:12:19.951 14212.627 - 14317.905: 98.0886% ( 13) 00:12:19.951 14317.905 - 14423.184: 98.1948% ( 14) 00:12:19.951 14423.184 - 14528.463: 98.2782% ( 11) 00:12:19.951 14528.463 - 14633.741: 98.3768% ( 13) 00:12:19.951 14633.741 - 14739.020: 98.4603% ( 11) 00:12:19.951 14739.020 - 14844.299: 98.5664% ( 14) 00:12:19.951 
14844.299 - 14949.578: 98.6499% ( 11) 00:12:19.951 14949.578 - 15054.856: 98.7409% ( 12) 00:12:19.951 15054.856 - 15160.135: 98.8319% ( 12) 00:12:19.951 15160.135 - 15265.414: 98.8850% ( 7) 00:12:19.951 15265.414 - 15370.692: 98.9078% ( 3) 00:12:19.951 15370.692 - 15475.971: 98.9305% ( 3) 00:12:19.951 15475.971 - 15581.250: 98.9533% ( 3) 00:12:19.951 15581.250 - 15686.529: 98.9836% ( 4) 00:12:19.951 15686.529 - 15791.807: 99.0064% ( 3) 00:12:19.951 15791.807 - 15897.086: 99.0291% ( 3) 00:12:19.951 36215.878 - 36426.435: 99.0367% ( 1) 00:12:19.951 36426.435 - 36636.993: 99.0822% ( 6) 00:12:19.951 36636.993 - 36847.550: 99.1277% ( 6) 00:12:19.951 36847.550 - 37058.108: 99.1657% ( 5) 00:12:19.951 37058.108 - 37268.665: 99.2112% ( 6) 00:12:19.951 37268.665 - 37479.222: 99.2567% ( 6) 00:12:19.951 37479.222 - 37689.780: 99.2946% ( 5) 00:12:19.951 37689.780 - 37900.337: 99.3477% ( 7) 00:12:19.951 37900.337 - 38110.895: 99.3932% ( 6) 00:12:19.951 38110.895 - 38321.452: 99.4387% ( 6) 00:12:19.951 38321.452 - 38532.010: 99.4766% ( 5) 00:12:19.951 38532.010 - 38742.567: 99.5146% ( 5) 00:12:19.951 44638.175 - 44848.733: 99.5297% ( 2) 00:12:19.951 44848.733 - 45059.290: 99.5752% ( 6) 00:12:19.951 45059.290 - 45269.847: 99.6208% ( 6) 00:12:19.951 45269.847 - 45480.405: 99.6587% ( 5) 00:12:19.951 45480.405 - 45690.962: 99.7118% ( 7) 00:12:19.951 45690.962 - 45901.520: 99.7573% ( 6) 00:12:19.951 45901.520 - 46112.077: 99.8028% ( 6) 00:12:19.951 46112.077 - 46322.635: 99.8483% ( 6) 00:12:19.951 46322.635 - 46533.192: 99.8938% ( 6) 00:12:19.951 46533.192 - 46743.749: 99.9393% ( 6) 00:12:19.951 46743.749 - 46954.307: 99.9848% ( 6) 00:12:19.951 46954.307 - 47164.864: 100.0000% ( 2) 00:12:19.951 00:12:19.951 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:19.951 ============================================================================== 00:12:19.951 Range in us Cumulative IO count 00:12:19.951 8001.182 - 8053.822: 0.0076% ( 1) 00:12:19.951 8053.822 - 8106.461: 0.0607% ( 7) 00:12:19.951 8106.461 - 8159.100: 0.3641% ( 40) 00:12:19.951 8159.100 - 8211.740: 1.0391% ( 89) 00:12:19.951 8211.740 - 8264.379: 2.2603% ( 161) 00:12:19.951 8264.379 - 8317.018: 3.9518% ( 223) 00:12:19.951 8317.018 - 8369.658: 6.1514% ( 290) 00:12:19.951 8369.658 - 8422.297: 8.8137% ( 351) 00:12:19.951 8422.297 - 8474.937: 11.9539% ( 414) 00:12:19.951 8474.937 - 8527.576: 15.4202% ( 457) 00:12:19.951 8527.576 - 8580.215: 19.0686% ( 481) 00:12:19.951 8580.215 - 8632.855: 22.9976% ( 518) 00:12:19.951 8632.855 - 8685.494: 27.0328% ( 532) 00:12:19.951 8685.494 - 8738.133: 31.2272% ( 553) 00:12:19.951 8738.133 - 8790.773: 35.5962% ( 576) 00:12:19.951 8790.773 - 8843.412: 40.0258% ( 584) 00:12:19.951 8843.412 - 8896.051: 44.4630% ( 585) 00:12:19.951 8896.051 - 8948.691: 48.9988% ( 598) 00:12:19.951 8948.691 - 9001.330: 53.2388% ( 559) 00:12:19.951 9001.330 - 9053.969: 57.1905% ( 521) 00:12:19.951 9053.969 - 9106.609: 60.6948% ( 462) 00:12:19.951 9106.609 - 9159.248: 63.8274% ( 413) 00:12:19.951 9159.248 - 9211.888: 66.4442% ( 345) 00:12:19.951 9211.888 - 9264.527: 68.6286% ( 288) 00:12:19.951 9264.527 - 9317.166: 70.4718% ( 243) 00:12:19.951 9317.166 - 9369.806: 72.0343% ( 206) 00:12:19.951 9369.806 - 9422.445: 73.3617% ( 175) 00:12:19.951 9422.445 - 9475.084: 74.4463% ( 143) 00:12:19.951 9475.084 - 9527.724: 75.3489% ( 119) 00:12:19.951 9527.724 - 9580.363: 76.1377% ( 104) 00:12:19.951 9580.363 - 9633.002: 76.8507% ( 94) 00:12:19.951 9633.002 - 9685.642: 77.5258% ( 89) 00:12:19.951 9685.642 - 9738.281: 78.1022% ( 76) 
00:12:19.951 9738.281 - 9790.920: 78.6408% ( 71) 00:12:19.951 9790.920 - 9843.560: 79.1035% ( 61) 00:12:19.951 9843.560 - 9896.199: 79.5130% ( 54) 00:12:19.951 9896.199 - 9948.839: 79.8999% ( 51) 00:12:19.951 9948.839 - 10001.478: 80.2715% ( 49) 00:12:19.951 10001.478 - 10054.117: 80.6129% ( 45) 00:12:19.951 10054.117 - 10106.757: 80.9011% ( 38) 00:12:19.951 10106.757 - 10159.396: 81.2652% ( 48) 00:12:19.951 10159.396 - 10212.035: 81.6444% ( 50) 00:12:19.951 10212.035 - 10264.675: 82.0009% ( 47) 00:12:19.951 10264.675 - 10317.314: 82.3346% ( 44) 00:12:19.951 10317.314 - 10369.953: 82.6229% ( 38) 00:12:19.951 10369.953 - 10422.593: 82.8504% ( 30) 00:12:19.951 10422.593 - 10475.232: 83.1235% ( 36) 00:12:19.951 10475.232 - 10527.871: 83.4345% ( 41) 00:12:19.951 10527.871 - 10580.511: 83.6924% ( 34) 00:12:19.951 10580.511 - 10633.150: 83.9275% ( 31) 00:12:19.951 10633.150 - 10685.790: 84.1626% ( 31) 00:12:19.951 10685.790 - 10738.429: 84.4129% ( 33) 00:12:19.951 10738.429 - 10791.068: 84.7012% ( 38) 00:12:19.951 10791.068 - 10843.708: 84.9515% ( 33) 00:12:19.951 10843.708 - 10896.347: 85.1562% ( 27) 00:12:19.951 10896.347 - 10948.986: 85.4141% ( 34) 00:12:19.951 10948.986 - 11001.626: 85.7100% ( 39) 00:12:19.951 11001.626 - 11054.265: 86.0361% ( 43) 00:12:19.951 11054.265 - 11106.904: 86.3926% ( 47) 00:12:19.951 11106.904 - 11159.544: 86.7718% ( 50) 00:12:19.951 11159.544 - 11212.183: 87.1890% ( 55) 00:12:19.951 11212.183 - 11264.822: 87.5986% ( 54) 00:12:19.951 11264.822 - 11317.462: 88.0234% ( 56) 00:12:19.951 11317.462 - 11370.101: 88.4481% ( 56) 00:12:19.951 11370.101 - 11422.741: 88.8501% ( 53) 00:12:19.951 11422.741 - 11475.380: 89.2825% ( 57) 00:12:19.951 11475.380 - 11528.019: 89.7224% ( 58) 00:12:19.951 11528.019 - 11580.659: 90.1699% ( 59) 00:12:19.951 11580.659 - 11633.298: 90.6098% ( 58) 00:12:19.951 11633.298 - 11685.937: 91.0953% ( 64) 00:12:19.951 11685.937 - 11738.577: 91.5352% ( 58) 00:12:19.951 11738.577 - 11791.216: 91.9372% ( 53) 00:12:19.951 11791.216 - 11843.855: 92.3392% ( 53) 00:12:19.951 11843.855 - 11896.495: 92.7412% ( 53) 00:12:19.951 11896.495 - 11949.134: 93.1660% ( 56) 00:12:19.951 11949.134 - 12001.773: 93.5376% ( 49) 00:12:19.951 12001.773 - 12054.413: 93.9245% ( 51) 00:12:19.951 12054.413 - 12107.052: 94.2203% ( 39) 00:12:19.951 12107.052 - 12159.692: 94.5009% ( 37) 00:12:19.951 12159.692 - 12212.331: 94.7209% ( 29) 00:12:19.951 12212.331 - 12264.970: 94.9181% ( 26) 00:12:19.951 12264.970 - 12317.610: 95.1305% ( 28) 00:12:19.951 12317.610 - 12370.249: 95.3049% ( 23) 00:12:19.951 12370.249 - 12422.888: 95.4490% ( 19) 00:12:19.951 12422.888 - 12475.528: 95.5931% ( 19) 00:12:19.951 12475.528 - 12528.167: 95.6766% ( 11) 00:12:19.951 12528.167 - 12580.806: 95.7600% ( 11) 00:12:19.951 12580.806 - 12633.446: 95.8510% ( 12) 00:12:19.951 12633.446 - 12686.085: 95.9269% ( 10) 00:12:19.951 12686.085 - 12738.724: 96.0103% ( 11) 00:12:19.951 12738.724 - 12791.364: 96.1089% ( 13) 00:12:19.951 12791.364 - 12844.003: 96.1999% ( 12) 00:12:19.951 12844.003 - 12896.643: 96.2758% ( 10) 00:12:19.951 12896.643 - 12949.282: 96.3516% ( 10) 00:12:19.951 12949.282 - 13001.921: 96.4199% ( 9) 00:12:19.951 13001.921 - 13054.561: 96.4806% ( 8) 00:12:19.951 13054.561 - 13107.200: 96.5413% ( 8) 00:12:19.951 13107.200 - 13159.839: 96.6247% ( 11) 00:12:19.951 13159.839 - 13212.479: 96.7005% ( 10) 00:12:19.951 13212.479 - 13265.118: 96.7764% ( 10) 00:12:19.951 13265.118 - 13317.757: 96.8295% ( 7) 00:12:19.951 13317.757 - 13370.397: 96.8750% ( 6) 00:12:19.951 13370.397 - 13423.036: 96.9205% ( 6) 
00:12:19.951 13423.036 - 13475.676: 96.9660% ( 6) 00:12:19.951 13475.676 - 13580.954: 97.0570% ( 12) 00:12:19.951 13580.954 - 13686.233: 97.1481% ( 12) 00:12:19.951 13686.233 - 13791.512: 97.2467% ( 13) 00:12:19.951 13791.512 - 13896.790: 97.3756% ( 17) 00:12:19.951 13896.790 - 14002.069: 97.4894% ( 15) 00:12:19.951 14002.069 - 14107.348: 97.6183% ( 17) 00:12:19.951 14107.348 - 14212.627: 97.7473% ( 17) 00:12:19.951 14212.627 - 14317.905: 97.9066% ( 21) 00:12:19.951 14317.905 - 14423.184: 98.0203% ( 15) 00:12:19.951 14423.184 - 14528.463: 98.1569% ( 18) 00:12:19.951 14528.463 - 14633.741: 98.2858% ( 17) 00:12:19.951 14633.741 - 14739.020: 98.3844% ( 13) 00:12:19.951 14739.020 - 14844.299: 98.4678% ( 11) 00:12:19.951 14844.299 - 14949.578: 98.5740% ( 14) 00:12:19.951 14949.578 - 15054.856: 98.6650% ( 12) 00:12:19.951 15054.856 - 15160.135: 98.7637% ( 13) 00:12:19.951 15160.135 - 15265.414: 98.8623% ( 13) 00:12:19.951 15265.414 - 15370.692: 98.9684% ( 14) 00:12:19.951 15370.692 - 15475.971: 99.0291% ( 8) 00:12:19.951 34531.418 - 34741.976: 99.0443% ( 2) 00:12:19.951 34741.976 - 34952.533: 99.0974% ( 7) 00:12:19.951 34952.533 - 35163.091: 99.1429% ( 6) 00:12:19.951 35163.091 - 35373.648: 99.1884% ( 6) 00:12:19.951 35373.648 - 35584.206: 99.2339% ( 6) 00:12:19.951 35584.206 - 35794.763: 99.2794% ( 6) 00:12:19.951 35794.763 - 36005.320: 99.3249% ( 6) 00:12:19.951 36005.320 - 36215.878: 99.3704% ( 6) 00:12:19.951 36215.878 - 36426.435: 99.4160% ( 6) 00:12:19.951 36426.435 - 36636.993: 99.4615% ( 6) 00:12:19.951 36636.993 - 36847.550: 99.5070% ( 6) 00:12:19.951 36847.550 - 37058.108: 99.5146% ( 1) 00:12:19.952 42743.158 - 42953.716: 99.5601% ( 6) 00:12:19.952 42953.716 - 43164.273: 99.6056% ( 6) 00:12:19.952 43164.273 - 43374.831: 99.6435% ( 5) 00:12:19.952 43374.831 - 43585.388: 99.6890% ( 6) 00:12:19.952 43585.388 - 43795.945: 99.7345% ( 6) 00:12:19.952 43795.945 - 44006.503: 99.7800% ( 6) 00:12:19.952 44006.503 - 44217.060: 99.8255% ( 6) 00:12:19.952 44217.060 - 44427.618: 99.8711% ( 6) 00:12:19.952 44427.618 - 44638.175: 99.9166% ( 6) 00:12:19.952 44638.175 - 44848.733: 99.9621% ( 6) 00:12:19.952 44848.733 - 45059.290: 100.0000% ( 5) 00:12:19.952 00:12:19.952 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:19.952 ============================================================================== 00:12:19.952 Range in us Cumulative IO count 00:12:19.952 8001.182 - 8053.822: 0.0379% ( 5) 00:12:19.952 8053.822 - 8106.461: 0.1517% ( 15) 00:12:19.952 8106.461 - 8159.100: 0.4627% ( 41) 00:12:19.952 8159.100 - 8211.740: 1.2970% ( 110) 00:12:19.952 8211.740 - 8264.379: 2.5865% ( 170) 00:12:19.952 8264.379 - 8317.018: 4.1490% ( 206) 00:12:19.952 8317.018 - 8369.658: 6.2955% ( 283) 00:12:19.952 8369.658 - 8422.297: 8.9351% ( 348) 00:12:19.952 8422.297 - 8474.937: 12.0070% ( 405) 00:12:19.952 8474.937 - 8527.576: 15.2837% ( 432) 00:12:19.952 8527.576 - 8580.215: 18.9017% ( 477) 00:12:19.952 8580.215 - 8632.855: 22.8307% ( 518) 00:12:19.952 8632.855 - 8685.494: 26.9721% ( 546) 00:12:19.952 8685.494 - 8738.133: 31.0907% ( 543) 00:12:19.952 8738.133 - 8790.773: 35.4066% ( 569) 00:12:19.952 8790.773 - 8843.412: 39.7376% ( 571) 00:12:19.952 8843.412 - 8896.051: 44.0989% ( 575) 00:12:19.952 8896.051 - 8948.691: 48.4754% ( 577) 00:12:19.952 8948.691 - 9001.330: 52.6927% ( 556) 00:12:19.952 9001.330 - 9053.969: 56.5534% ( 509) 00:12:19.952 9053.969 - 9106.609: 60.0956% ( 467) 00:12:19.952 9106.609 - 9159.248: 63.2206% ( 412) 00:12:19.952 9159.248 - 9211.888: 65.9360% ( 358) 00:12:19.952 9211.888 
- 9264.527: 68.2039% ( 299) 00:12:19.952 9264.527 - 9317.166: 70.1380% ( 255) 00:12:19.952 9317.166 - 9369.806: 71.8067% ( 220) 00:12:19.952 9369.806 - 9422.445: 73.1493% ( 177) 00:12:19.952 9422.445 - 9475.084: 74.3098% ( 153) 00:12:19.952 9475.084 - 9527.724: 75.2275% ( 121) 00:12:19.952 9527.724 - 9580.363: 76.0088% ( 103) 00:12:19.952 9580.363 - 9633.002: 76.7142% ( 93) 00:12:19.952 9633.002 - 9685.642: 77.3286% ( 81) 00:12:19.952 9685.642 - 9738.281: 77.9202% ( 78) 00:12:19.952 9738.281 - 9790.920: 78.4436% ( 69) 00:12:19.952 9790.920 - 9843.560: 78.9669% ( 69) 00:12:19.952 9843.560 - 9896.199: 79.4220% ( 60) 00:12:19.952 9896.199 - 9948.839: 79.8392% ( 55) 00:12:19.952 9948.839 - 10001.478: 80.2109% ( 49) 00:12:19.952 10001.478 - 10054.117: 80.5825% ( 49) 00:12:19.952 10054.117 - 10106.757: 80.9163% ( 44) 00:12:19.952 10106.757 - 10159.396: 81.2348% ( 42) 00:12:19.952 10159.396 - 10212.035: 81.5458% ( 41) 00:12:19.952 10212.035 - 10264.675: 81.8871% ( 45) 00:12:19.952 10264.675 - 10317.314: 82.2512% ( 48) 00:12:19.952 10317.314 - 10369.953: 82.6229% ( 49) 00:12:19.952 10369.953 - 10422.593: 82.9566% ( 44) 00:12:19.952 10422.593 - 10475.232: 83.2600% ( 40) 00:12:19.952 10475.232 - 10527.871: 83.5407% ( 37) 00:12:19.952 10527.871 - 10580.511: 83.7682% ( 30) 00:12:19.952 10580.511 - 10633.150: 83.9882% ( 29) 00:12:19.952 10633.150 - 10685.790: 84.2081% ( 29) 00:12:19.952 10685.790 - 10738.429: 84.4584% ( 33) 00:12:19.952 10738.429 - 10791.068: 84.7087% ( 33) 00:12:19.952 10791.068 - 10843.708: 84.9818% ( 36) 00:12:19.952 10843.708 - 10896.347: 85.2700% ( 38) 00:12:19.952 10896.347 - 10948.986: 85.5962% ( 43) 00:12:19.952 10948.986 - 11001.626: 85.9147% ( 42) 00:12:19.952 11001.626 - 11054.265: 86.2485% ( 44) 00:12:19.952 11054.265 - 11106.904: 86.5974% ( 46) 00:12:19.952 11106.904 - 11159.544: 86.9766% ( 50) 00:12:19.952 11159.544 - 11212.183: 87.4166% ( 58) 00:12:19.952 11212.183 - 11264.822: 87.8110% ( 52) 00:12:19.952 11264.822 - 11317.462: 88.2282% ( 55) 00:12:19.952 11317.462 - 11370.101: 88.6453% ( 55) 00:12:19.952 11370.101 - 11422.741: 89.0777% ( 57) 00:12:19.952 11422.741 - 11475.380: 89.4948% ( 55) 00:12:19.952 11475.380 - 11528.019: 89.8893% ( 52) 00:12:19.952 11528.019 - 11580.659: 90.3292% ( 58) 00:12:19.952 11580.659 - 11633.298: 90.7691% ( 58) 00:12:19.952 11633.298 - 11685.937: 91.1787% ( 54) 00:12:19.952 11685.937 - 11738.577: 91.6262% ( 59) 00:12:19.952 11738.577 - 11791.216: 92.0130% ( 51) 00:12:19.952 11791.216 - 11843.855: 92.4075% ( 52) 00:12:19.952 11843.855 - 11896.495: 92.8095% ( 53) 00:12:19.952 11896.495 - 11949.134: 93.2115% ( 53) 00:12:19.952 11949.134 - 12001.773: 93.5680% ( 47) 00:12:19.952 12001.773 - 12054.413: 93.9169% ( 46) 00:12:19.952 12054.413 - 12107.052: 94.2430% ( 43) 00:12:19.952 12107.052 - 12159.692: 94.5692% ( 43) 00:12:19.952 12159.692 - 12212.331: 94.8195% ( 33) 00:12:19.952 12212.331 - 12264.970: 94.9939% ( 23) 00:12:19.952 12264.970 - 12317.610: 95.1305% ( 18) 00:12:19.952 12317.610 - 12370.249: 95.2897% ( 21) 00:12:19.952 12370.249 - 12422.888: 95.4035% ( 15) 00:12:19.952 12422.888 - 12475.528: 95.5249% ( 16) 00:12:19.952 12475.528 - 12528.167: 95.6462% ( 16) 00:12:19.952 12528.167 - 12580.806: 95.7524% ( 14) 00:12:19.952 12580.806 - 12633.446: 95.8662% ( 15) 00:12:19.952 12633.446 - 12686.085: 95.9648% ( 13) 00:12:19.952 12686.085 - 12738.724: 96.0331% ( 9) 00:12:19.952 12738.724 - 12791.364: 96.1013% ( 9) 00:12:19.952 12791.364 - 12844.003: 96.1696% ( 9) 00:12:19.952 12844.003 - 12896.643: 96.2151% ( 6) 00:12:19.952 12896.643 - 
12949.282: 96.2379% ( 3)
00:12:19.952 12949.282 - 13001.921: 96.2834% ( 6)
00:12:19.952 13001.921 - 13054.561: 96.3365% ( 7)
00:12:19.952 13054.561 - 13107.200: 96.3896% ( 7)
00:12:19.952 13107.200 - 13159.839: 96.4502% ( 8)
00:12:19.952 13159.839 - 13212.479: 96.5261% ( 10)
00:12:19.952 13212.479 - 13265.118: 96.5792% ( 7)
00:12:19.952 13265.118 - 13317.757: 96.6323% ( 7)
00:12:19.952 13317.757 - 13370.397: 96.6702% ( 5)
00:12:19.952 13370.397 - 13423.036: 96.7233% ( 7)
00:12:19.952 13423.036 - 13475.676: 96.7840% ( 8)
00:12:19.952 13475.676 - 13580.954: 96.9129% ( 17)
00:12:19.952 13580.954 - 13686.233: 97.0798% ( 22)
00:12:19.952 13686.233 - 13791.512: 97.2315% ( 20)
00:12:19.953 13791.512 - 13896.790: 97.3529% ( 16)
00:12:19.953 13896.790 - 14002.069: 97.4894% ( 18)
00:12:19.953 14002.069 - 14107.348: 97.6335% ( 19)
00:12:19.953 14107.348 - 14212.627: 97.7852% ( 20)
00:12:19.953 14212.627 - 14317.905: 97.9293% ( 19)
00:12:19.953 14317.905 - 14423.184: 98.0810% ( 20)
00:12:19.953 14423.184 - 14528.463: 98.2175% ( 18)
00:12:19.953 14528.463 - 14633.741: 98.3389% ( 16)
00:12:19.953 14633.741 - 14739.020: 98.4527% ( 15)
00:12:19.953 14739.020 - 14844.299: 98.5589% ( 14)
00:12:19.953 14844.299 - 14949.578: 98.6650% ( 14)
00:12:19.953 14949.578 - 15054.856: 98.7561% ( 12)
00:12:19.953 15054.856 - 15160.135: 98.8319% ( 10)
00:12:19.953 15160.135 - 15265.414: 98.8850% ( 7)
00:12:19.953 15265.414 - 15370.692: 98.9457% ( 8)
00:12:19.953 15370.692 - 15475.971: 98.9836% ( 5)
00:12:19.953 15475.971 - 15581.250: 99.0215% ( 5)
00:12:19.953 15581.250 - 15686.529: 99.0291% ( 1)
00:12:19.953 32636.402 - 32846.959: 99.0671% ( 5)
00:12:19.953 32846.959 - 33057.516: 99.1126% ( 6)
00:12:19.953 33057.516 - 33268.074: 99.1581% ( 6)
00:12:19.953 33268.074 - 33478.631: 99.2036% ( 6)
00:12:19.953 33478.631 - 33689.189: 99.2415% ( 5)
00:12:19.953 33689.189 - 33899.746: 99.2794% ( 5)
00:12:19.953 33899.746 - 34110.304: 99.3325% ( 7)
00:12:19.953 34110.304 - 34320.861: 99.3704% ( 5)
00:12:19.953 34320.861 - 34531.418: 99.4160% ( 6)
00:12:19.953 34531.418 - 34741.976: 99.4615% ( 6)
00:12:19.953 34741.976 - 34952.533: 99.4994% ( 5)
00:12:19.953 34952.533 - 35163.091: 99.5146% ( 2)
00:12:19.953 40637.584 - 40848.141: 99.5449% ( 4)
00:12:19.953 40848.141 - 41058.699: 99.5828% ( 5)
00:12:19.953 41058.699 - 41269.256: 99.6283% ( 6)
00:12:19.953 41269.256 - 41479.814: 99.6663% ( 5)
00:12:19.953 41479.814 - 41690.371: 99.7118% ( 6)
00:12:19.953 41690.371 - 41900.929: 99.7573% ( 6)
00:12:19.953 41900.929 - 42111.486: 99.8028% ( 6)
00:12:19.953 42111.486 - 42322.043: 99.8483% ( 6)
00:12:19.953 42322.043 - 42532.601: 99.8938% ( 6)
00:12:19.953 42532.601 - 42743.158: 99.9393% ( 6)
00:12:19.953 42743.158 - 42953.716: 99.9848% ( 6)
00:12:19.953 42953.716 - 43164.273: 100.0000% ( 2)
00:12:19.953
00:12:19.953 10:17:27 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:12:21.334 Initializing NVMe Controllers
00:12:21.334 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:21.334 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:21.334 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:21.334 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:21.334 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:12:21.334 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:12:21.334 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:12:21.334 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:12:21.334 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:12:21.334 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:12:21.334 Initialization complete. Launching workers.
00:12:21.334 ========================================================
00:12:21.334 Latency(us)
00:12:21.334 Device Information : IOPS MiB/s Average min max
00:12:21.334 PCIE (0000:00:10.0) NSID 1 from core 0: 9244.29 108.33 13891.58 8395.85 44882.83
00:12:21.334 PCIE (0000:00:11.0) NSID 1 from core 0: 9244.29 108.33 13873.71 8639.27 42965.47
00:12:21.334 PCIE (0000:00:13.0) NSID 1 from core 0: 9244.29 108.33 13855.40 8732.56 41972.83
00:12:21.334 PCIE (0000:00:12.0) NSID 1 from core 0: 9244.29 108.33 13836.83 8782.68 40011.37
00:12:21.334 PCIE (0000:00:12.0) NSID 2 from core 0: 9244.29 108.33 13818.71 8385.77 38414.72
00:12:21.334 PCIE (0000:00:12.0) NSID 3 from core 0: 9308.04 109.08 13706.69 8372.25 30835.22
00:12:21.334 ========================================================
00:12:21.334 Total : 55529.49 650.74 13830.34 8372.25 44882.83
00:12:21.334
00:12:21.334 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:21.334 =================================================================================
00:12:21.334 1.00000% : 8896.051us
00:12:21.334 10.00000% : 9948.839us
00:12:21.334 25.00000% : 11633.298us
00:12:21.334 50.00000% : 13370.397us
00:12:21.334 75.00000% : 15265.414us
00:12:21.334 90.00000% : 17792.103us
00:12:21.334 95.00000% : 18634.333us
00:12:21.334 98.00000% : 20213.513us
00:12:21.334 99.00000% : 35584.206us
00:12:21.334 99.50000% : 43585.388us
00:12:21.334 99.90000% : 44638.175us
00:12:21.334 99.99000% : 45059.290us
00:12:21.334 99.99900% : 45059.290us
00:12:21.334 99.99990% : 45059.290us
00:12:21.334 99.99999% : 45059.290us
00:12:21.334
00:12:21.334 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:12:21.334 =================================================================================
00:12:21.334 1.00000% : 9106.609us
00:12:21.334 10.00000% : 10054.117us
00:12:21.334 25.00000% : 11528.019us
00:12:21.334 50.00000% : 13370.397us
00:12:21.334 75.00000% : 15160.135us
00:12:21.334 90.00000% : 17686.824us
00:12:21.334 95.00000% : 18950.169us
00:12:21.334 98.00000% : 20213.513us
00:12:21.334 99.00000% : 34110.304us
00:12:21.334 99.50000% : 41900.929us
00:12:21.334 99.90000% : 42743.158us
00:12:21.334 99.99000% : 43164.273us
00:12:21.334 99.99900% : 43164.273us
00:12:21.334 99.99990% : 43164.273us
00:12:21.334 99.99999% : 43164.273us
00:12:21.334
00:12:21.334 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:12:21.334 =================================================================================
00:12:21.334 1.00000% : 9106.609us
00:12:21.334 10.00000% : 9896.199us
00:12:21.334 25.00000% : 11633.298us
00:12:21.334 50.00000% : 13475.676us
00:12:21.334 75.00000% : 15370.692us
00:12:21.334 90.00000% : 17476.267us
00:12:21.334 95.00000% : 18634.333us
00:12:21.334 98.00000% : 19792.398us
00:12:21.334 99.00000% : 33478.631us
00:12:21.334 99.50000% : 40848.141us
00:12:21.334 99.90000% : 41900.929us
00:12:21.334 99.99000% : 42111.486us
00:12:21.334 99.99900% : 42111.486us
00:12:21.334 99.99990% : 42111.486us
00:12:21.334 99.99999% : 42111.486us
00:12:21.334
00:12:21.334 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:12:21.334 =================================================================================
00:12:21.334 1.00000% : 9159.248us
00:12:21.334 10.00000% : 9948.839us
00:12:21.334 25.00000% : 11580.659us
00:12:21.334 50.00000% : 13423.036us
00:12:21.334 75.00000% : 15265.414us
00:12:21.334 90.00000% : 17476.267us
00:12:21.334 95.00000% : 18529.054us
00:12:21.334 98.00000% : 19897.677us
00:12:21.334 99.00000% : 31794.172us
00:12:21.334 99.50000% : 38953.124us
00:12:21.334 99.90000% : 39795.354us
00:12:21.334 99.99000% : 40216.469us
00:12:21.334 99.99900% : 40216.469us
00:12:21.335 99.99990% : 40216.469us
00:12:21.335 99.99999% : 40216.469us
00:12:21.335
00:12:21.335 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:12:21.335 =================================================================================
00:12:21.335 1.00000% : 9053.969us
00:12:21.335 10.00000% : 10001.478us
00:12:21.335 25.00000% : 11528.019us
00:12:21.335 50.00000% : 13423.036us
00:12:21.335 75.00000% : 15370.692us
00:12:21.335 90.00000% : 17581.545us
00:12:21.335 95.00000% : 18423.775us
00:12:21.335 98.00000% : 19897.677us
00:12:21.335 99.00000% : 30530.827us
00:12:21.335 99.50000% : 37268.665us
00:12:21.335 99.90000% : 38321.452us
00:12:21.335 99.99000% : 38532.010us
00:12:21.335 99.99900% : 38532.010us
00:12:21.335 99.99990% : 38532.010us
00:12:21.335 99.99999% : 38532.010us
00:12:21.335
00:12:21.335 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:12:21.335 =================================================================================
00:12:21.335 1.00000% : 9053.969us
00:12:21.335 10.00000% : 9896.199us
00:12:21.335 25.00000% : 11580.659us
00:12:21.335 50.00000% : 13423.036us
00:12:21.335 75.00000% : 15265.414us
00:12:21.335 90.00000% : 17686.824us
00:12:21.335 95.00000% : 18318.496us
00:12:21.335 98.00000% : 20002.956us
00:12:21.335 99.00000% : 22213.809us
00:12:21.335 99.50000% : 29688.598us
00:12:21.335 99.90000% : 30741.385us
00:12:21.335 99.99000% : 30951.942us
00:12:21.335 99.99900% : 30951.942us
00:12:21.335 99.99990% : 30951.942us
00:12:21.335 99.99999% : 30951.942us
00:12:21.335
00:12:21.335 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:12:21.335 ==============================================================================
00:12:21.335 Range in us Cumulative IO count
00:12:21.335 8369.658 - 8422.297: 0.0323% ( 3)
00:12:21.335 8422.297 - 8474.937: 0.0970% ( 6)
00:12:21.335 8474.937 - 8527.576: 0.2155% ( 11)
00:12:21.335 8527.576 - 8580.215: 0.2263% ( 1)
00:12:21.335 8580.215 - 8632.855: 0.2478% ( 2)
00:12:21.335 8632.855 - 8685.494: 0.3448% ( 9)
00:12:21.335 8685.494 - 8738.133: 0.4957% ( 14)
00:12:21.335 8738.133 - 8790.773: 0.6466% ( 14)
00:12:21.335 8790.773 - 8843.412: 0.8190% ( 16)
00:12:21.335 8843.412 - 8896.051: 1.0345% ( 20)
00:12:21.335 8896.051 - 8948.691: 1.3362% ( 28)
00:12:21.335 8948.691 - 9001.330: 1.4224% ( 8)
00:12:21.335 9001.330 - 9053.969: 1.5517% ( 12)
00:12:21.335 9053.969 - 9106.609: 1.7134% ( 15)
00:12:21.335 9106.609 - 9159.248: 1.9828% ( 25)
00:12:21.335 9159.248 - 9211.888: 2.2845% ( 28)
00:12:21.335 9211.888 - 9264.527: 2.7478% ( 43)
00:12:21.335 9264.527 - 9317.166: 3.1250% ( 35)
00:12:21.335 9317.166 - 9369.806: 3.5668% ( 41)
00:12:21.335 9369.806 - 9422.445: 4.3642% ( 74)
00:12:21.335 9422.445 - 9475.084: 5.6358% ( 118)
00:12:21.335 9475.084 - 9527.724: 6.4978% ( 80)
00:12:21.335 9527.724 - 9580.363: 6.8534% ( 33)
00:12:21.335 9580.363 - 9633.002: 7.1336% ( 26)
00:12:21.335 9633.002 - 9685.642: 7.6509% ( 48)
00:12:21.335 9685.642 - 9738.281: 8.1897% ( 50)
00:12:21.335 9738.281 - 9790.920: 8.6746% ( 45)
00:12:21.335 9790.920 - 9843.560: 9.4181% ( 69)
00:12:21.335 9843.560 - 9896.199: 9.8168% ( 37)
00:12:21.335 9896.199 - 9948.839:
10.3125% ( 46) 00:12:21.335 9948.839 - 10001.478: 10.7543% ( 41) 00:12:21.335 10001.478 - 10054.117: 11.2716% ( 48) 00:12:21.335 10054.117 - 10106.757: 11.6595% ( 36) 00:12:21.335 10106.757 - 10159.396: 12.1444% ( 45) 00:12:21.335 10159.396 - 10212.035: 12.5647% ( 39) 00:12:21.335 10212.035 - 10264.675: 12.9418% ( 35) 00:12:21.335 10264.675 - 10317.314: 13.5453% ( 56) 00:12:21.335 10317.314 - 10369.953: 14.1272% ( 54) 00:12:21.335 10369.953 - 10422.593: 14.4935% ( 34) 00:12:21.335 10422.593 - 10475.232: 14.7629% ( 25) 00:12:21.335 10475.232 - 10527.871: 15.0431% ( 26) 00:12:21.335 10527.871 - 10580.511: 15.2694% ( 21) 00:12:21.335 10580.511 - 10633.150: 15.8190% ( 51) 00:12:21.335 10633.150 - 10685.790: 16.4440% ( 58) 00:12:21.335 10685.790 - 10738.429: 17.0043% ( 52) 00:12:21.335 10738.429 - 10791.068: 17.6078% ( 56) 00:12:21.335 10791.068 - 10843.708: 18.1142% ( 47) 00:12:21.335 10843.708 - 10896.347: 18.7177% ( 56) 00:12:21.335 10896.347 - 10948.986: 19.1703% ( 42) 00:12:21.335 10948.986 - 11001.626: 19.7629% ( 55) 00:12:21.335 11001.626 - 11054.265: 20.1940% ( 40) 00:12:21.335 11054.265 - 11106.904: 20.7112% ( 48) 00:12:21.335 11106.904 - 11159.544: 21.3254% ( 57) 00:12:21.335 11159.544 - 11212.183: 21.8642% ( 50) 00:12:21.335 11212.183 - 11264.822: 22.3815% ( 48) 00:12:21.335 11264.822 - 11317.462: 22.9310% ( 51) 00:12:21.335 11317.462 - 11370.101: 23.4483% ( 48) 00:12:21.335 11370.101 - 11422.741: 23.8254% ( 35) 00:12:21.335 11422.741 - 11475.380: 24.2672% ( 41) 00:12:21.335 11475.380 - 11528.019: 24.6228% ( 33) 00:12:21.335 11528.019 - 11580.659: 24.9461% ( 30) 00:12:21.335 11580.659 - 11633.298: 25.3448% ( 37) 00:12:21.335 11633.298 - 11685.937: 25.6034% ( 24) 00:12:21.335 11685.937 - 11738.577: 25.9591% ( 33) 00:12:21.335 11738.577 - 11791.216: 26.3578% ( 37) 00:12:21.335 11791.216 - 11843.855: 26.7565% ( 37) 00:12:21.335 11843.855 - 11896.495: 27.0474% ( 27) 00:12:21.335 11896.495 - 11949.134: 27.3384% ( 27) 00:12:21.335 11949.134 - 12001.773: 27.5970% ( 24) 00:12:21.335 12001.773 - 12054.413: 27.8233% ( 21) 00:12:21.335 12054.413 - 12107.052: 28.0819% ( 24) 00:12:21.335 12107.052 - 12159.692: 28.3728% ( 27) 00:12:21.335 12159.692 - 12212.331: 28.7823% ( 38) 00:12:21.335 12212.331 - 12264.970: 29.0841% ( 28) 00:12:21.335 12264.970 - 12317.610: 29.2780% ( 18) 00:12:21.335 12317.610 - 12370.249: 29.5151% ( 22) 00:12:21.335 12370.249 - 12422.888: 29.8384% ( 30) 00:12:21.335 12422.888 - 12475.528: 30.2802% ( 41) 00:12:21.335 12475.528 - 12528.167: 30.6897% ( 38) 00:12:21.335 12528.167 - 12580.806: 31.2823% ( 55) 00:12:21.335 12580.806 - 12633.446: 32.0366% ( 70) 00:12:21.335 12633.446 - 12686.085: 32.9095% ( 81) 00:12:21.335 12686.085 - 12738.724: 33.9009% ( 92) 00:12:21.335 12738.724 - 12791.364: 34.9030% ( 93) 00:12:21.335 12791.364 - 12844.003: 35.8621% ( 89) 00:12:21.335 12844.003 - 12896.643: 37.4677% ( 149) 00:12:21.335 12896.643 - 12949.282: 38.9116% ( 134) 00:12:21.335 12949.282 - 13001.921: 40.3017% ( 129) 00:12:21.335 13001.921 - 13054.561: 41.8534% ( 144) 00:12:21.335 13054.561 - 13107.200: 43.2435% ( 129) 00:12:21.335 13107.200 - 13159.839: 44.7953% ( 144) 00:12:21.335 13159.839 - 13212.479: 46.0129% ( 113) 00:12:21.335 13212.479 - 13265.118: 47.3491% ( 124) 00:12:21.335 13265.118 - 13317.757: 48.6099% ( 117) 00:12:21.335 13317.757 - 13370.397: 50.3125% ( 158) 00:12:21.335 13370.397 - 13423.036: 52.0797% ( 164) 00:12:21.335 13423.036 - 13475.676: 53.5453% ( 136) 00:12:21.335 13475.676 - 13580.954: 56.0345% ( 231) 00:12:21.335 13580.954 - 13686.233: 57.9957% ( 182) 
00:12:21.335 13686.233 - 13791.512: 59.8599% ( 173) 00:12:21.335 13791.512 - 13896.790: 61.5517% ( 157) 00:12:21.335 13896.790 - 14002.069: 63.1466% ( 148) 00:12:21.335 14002.069 - 14107.348: 64.6336% ( 138) 00:12:21.335 14107.348 - 14212.627: 65.8513% ( 113) 00:12:21.335 14212.627 - 14317.905: 67.1121% ( 117) 00:12:21.335 14317.905 - 14423.184: 68.2112% ( 102) 00:12:21.335 14423.184 - 14528.463: 69.0625% ( 79) 00:12:21.335 14528.463 - 14633.741: 70.1078% ( 97) 00:12:21.335 14633.741 - 14739.020: 71.0022% ( 83) 00:12:21.335 14739.020 - 14844.299: 72.1336% ( 105) 00:12:21.335 14844.299 - 14949.578: 73.0172% ( 82) 00:12:21.335 14949.578 - 15054.856: 73.7500% ( 68) 00:12:21.335 15054.856 - 15160.135: 74.5151% ( 71) 00:12:21.335 15160.135 - 15265.414: 75.2047% ( 64) 00:12:21.335 15265.414 - 15370.692: 76.0022% ( 74) 00:12:21.335 15370.692 - 15475.971: 76.9073% ( 84) 00:12:21.335 15475.971 - 15581.250: 77.7371% ( 77) 00:12:21.335 15581.250 - 15686.529: 78.5129% ( 72) 00:12:21.335 15686.529 - 15791.807: 79.1272% ( 57) 00:12:21.335 15791.807 - 15897.086: 79.7414% ( 57) 00:12:21.335 15897.086 - 16002.365: 80.2586% ( 48) 00:12:21.335 16002.365 - 16107.643: 80.7112% ( 42) 00:12:21.335 16107.643 - 16212.922: 81.2284% ( 48) 00:12:21.335 16212.922 - 16318.201: 81.6918% ( 43) 00:12:21.335 16318.201 - 16423.480: 82.2737% ( 54) 00:12:21.335 16423.480 - 16528.758: 82.7694% ( 46) 00:12:21.335 16528.758 - 16634.037: 83.2543% ( 45) 00:12:21.335 16634.037 - 16739.316: 83.7716% ( 48) 00:12:21.335 16739.316 - 16844.594: 84.4073% ( 59) 00:12:21.335 16844.594 - 16949.873: 84.9892% ( 54) 00:12:21.335 16949.873 - 17055.152: 85.7435% ( 70) 00:12:21.335 17055.152 - 17160.431: 86.6703% ( 86) 00:12:21.335 17160.431 - 17265.709: 87.3815% ( 66) 00:12:21.335 17265.709 - 17370.988: 87.9634% ( 54) 00:12:21.335 17370.988 - 17476.267: 88.5345% ( 53) 00:12:21.335 17476.267 - 17581.545: 89.1595% ( 58) 00:12:21.335 17581.545 - 17686.824: 89.8060% ( 60) 00:12:21.335 17686.824 - 17792.103: 90.4203% ( 57) 00:12:21.335 17792.103 - 17897.382: 91.0776% ( 61) 00:12:21.335 17897.382 - 18002.660: 91.7888% ( 66) 00:12:21.335 18002.660 - 18107.939: 92.4353% ( 60) 00:12:21.335 18107.939 - 18213.218: 92.9526% ( 48) 00:12:21.335 18213.218 - 18318.496: 93.5237% ( 53) 00:12:21.335 18318.496 - 18423.775: 94.1056% ( 54) 00:12:21.335 18423.775 - 18529.054: 94.6444% ( 50) 00:12:21.335 18529.054 - 18634.333: 95.1724% ( 49) 00:12:21.335 18634.333 - 18739.611: 95.6142% ( 41) 00:12:21.335 18739.611 - 18844.890: 95.9698% ( 33) 00:12:21.335 18844.890 - 18950.169: 96.2931% ( 30) 00:12:21.335 18950.169 - 19055.447: 96.4655% ( 16) 00:12:21.335 19055.447 - 19160.726: 96.6810% ( 20) 00:12:21.335 19160.726 - 19266.005: 96.8427% ( 15) 00:12:21.335 19266.005 - 19371.284: 97.0474% ( 19) 00:12:21.335 19371.284 - 19476.562: 97.1983% ( 14) 00:12:21.335 19476.562 - 19581.841: 97.3168% ( 11) 00:12:21.335 19581.841 - 19687.120: 97.4569% ( 13) 00:12:21.336 19687.120 - 19792.398: 97.5970% ( 13) 00:12:21.336 19792.398 - 19897.677: 97.7263% ( 12) 00:12:21.336 19897.677 - 20002.956: 97.8341% ( 10) 00:12:21.336 20002.956 - 20108.235: 97.9418% ( 10) 00:12:21.336 20108.235 - 20213.513: 98.0388% ( 9) 00:12:21.336 20213.513 - 20318.792: 98.1250% ( 8) 00:12:21.336 20318.792 - 20424.071: 98.2004% ( 7) 00:12:21.336 20424.071 - 20529.349: 98.2435% ( 4) 00:12:21.336 20529.349 - 20634.628: 98.2866% ( 4) 00:12:21.336 20634.628 - 20739.907: 98.3297% ( 4) 00:12:21.336 20739.907 - 20845.186: 98.3836% ( 5) 00:12:21.336 20845.186 - 20950.464: 98.4159% ( 3) 00:12:21.336 20950.464 - 21055.743: 
98.4591% ( 4) 00:12:21.336 21055.743 - 21161.022: 98.5022% ( 4) 00:12:21.336 21161.022 - 21266.300: 98.5453% ( 4) 00:12:21.336 21266.300 - 21371.579: 98.5776% ( 3) 00:12:21.336 21371.579 - 21476.858: 98.6207% ( 4) 00:12:21.336 34320.861 - 34531.418: 98.6422% ( 2) 00:12:21.336 34531.418 - 34741.976: 98.7177% ( 7) 00:12:21.336 34741.976 - 34952.533: 98.7931% ( 7) 00:12:21.336 34952.533 - 35163.091: 98.8793% ( 8) 00:12:21.336 35163.091 - 35373.648: 98.9547% ( 7) 00:12:21.336 35373.648 - 35584.206: 99.0302% ( 7) 00:12:21.336 35584.206 - 35794.763: 99.1272% ( 9) 00:12:21.336 35794.763 - 36005.320: 99.2026% ( 7) 00:12:21.336 36005.320 - 36215.878: 99.2780% ( 7) 00:12:21.336 36215.878 - 36426.435: 99.3103% ( 3) 00:12:21.336 42953.716 - 43164.273: 99.3750% ( 6) 00:12:21.336 43164.273 - 43374.831: 99.4612% ( 8) 00:12:21.336 43374.831 - 43585.388: 99.5259% ( 6) 00:12:21.336 43585.388 - 43795.945: 99.6121% ( 8) 00:12:21.336 43795.945 - 44006.503: 99.6767% ( 6) 00:12:21.336 44006.503 - 44217.060: 99.7522% ( 7) 00:12:21.336 44217.060 - 44427.618: 99.8384% ( 8) 00:12:21.336 44427.618 - 44638.175: 99.9138% ( 7) 00:12:21.336 44638.175 - 44848.733: 99.9892% ( 7) 00:12:21.336 44848.733 - 45059.290: 100.0000% ( 1) 00:12:21.336 00:12:21.336 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:21.336 ============================================================================== 00:12:21.336 Range in us Cumulative IO count 00:12:21.336 8632.855 - 8685.494: 0.0431% ( 4) 00:12:21.336 8685.494 - 8738.133: 0.1293% ( 8) 00:12:21.336 8738.133 - 8790.773: 0.2371% ( 10) 00:12:21.336 8790.773 - 8843.412: 0.5280% ( 27) 00:12:21.336 8843.412 - 8896.051: 0.5927% ( 6) 00:12:21.336 8896.051 - 8948.691: 0.6681% ( 7) 00:12:21.336 8948.691 - 9001.330: 0.7220% ( 5) 00:12:21.336 9001.330 - 9053.969: 0.8836% ( 15) 00:12:21.336 9053.969 - 9106.609: 1.0991% ( 20) 00:12:21.336 9106.609 - 9159.248: 1.5733% ( 44) 00:12:21.336 9159.248 - 9211.888: 2.1228% ( 51) 00:12:21.336 9211.888 - 9264.527: 2.7802% ( 61) 00:12:21.336 9264.527 - 9317.166: 3.3297% ( 51) 00:12:21.336 9317.166 - 9369.806: 3.8254% ( 46) 00:12:21.336 9369.806 - 9422.445: 4.5797% ( 70) 00:12:21.336 9422.445 - 9475.084: 5.1832% ( 56) 00:12:21.336 9475.084 - 9527.724: 5.7759% ( 55) 00:12:21.336 9527.724 - 9580.363: 6.4871% ( 66) 00:12:21.336 9580.363 - 9633.002: 7.2737% ( 73) 00:12:21.336 9633.002 - 9685.642: 8.0496% ( 72) 00:12:21.336 9685.642 - 9738.281: 8.7284% ( 63) 00:12:21.336 9738.281 - 9790.920: 9.1272% ( 37) 00:12:21.336 9790.920 - 9843.560: 9.4504% ( 30) 00:12:21.336 9843.560 - 9896.199: 9.6552% ( 19) 00:12:21.336 9896.199 - 9948.839: 9.7953% ( 13) 00:12:21.336 9948.839 - 10001.478: 9.9569% ( 15) 00:12:21.336 10001.478 - 10054.117: 10.3448% ( 36) 00:12:21.336 10054.117 - 10106.757: 10.7543% ( 38) 00:12:21.336 10106.757 - 10159.396: 11.2716% ( 48) 00:12:21.336 10159.396 - 10212.035: 11.9720% ( 65) 00:12:21.336 10212.035 - 10264.675: 12.4461% ( 44) 00:12:21.336 10264.675 - 10317.314: 12.7155% ( 25) 00:12:21.336 10317.314 - 10369.953: 13.0819% ( 34) 00:12:21.336 10369.953 - 10422.593: 13.6099% ( 49) 00:12:21.336 10422.593 - 10475.232: 14.2349% ( 58) 00:12:21.336 10475.232 - 10527.871: 14.7737% ( 50) 00:12:21.336 10527.871 - 10580.511: 15.5496% ( 72) 00:12:21.336 10580.511 - 10633.150: 16.3039% ( 70) 00:12:21.336 10633.150 - 10685.790: 16.6379% ( 31) 00:12:21.336 10685.790 - 10738.429: 17.0582% ( 39) 00:12:21.336 10738.429 - 10791.068: 17.5647% ( 47) 00:12:21.336 10791.068 - 10843.708: 18.1789% ( 57) 00:12:21.336 10843.708 - 10896.347: 18.7500% ( 53) 
00:12:21.336 10896.347 - 10948.986: 19.3319% ( 54) 00:12:21.336 10948.986 - 11001.626: 19.9784% ( 60) 00:12:21.336 11001.626 - 11054.265: 20.7220% ( 69) 00:12:21.336 11054.265 - 11106.904: 21.4116% ( 64) 00:12:21.336 11106.904 - 11159.544: 21.8319% ( 39) 00:12:21.336 11159.544 - 11212.183: 22.4461% ( 57) 00:12:21.336 11212.183 - 11264.822: 22.8556% ( 38) 00:12:21.336 11264.822 - 11317.462: 23.2651% ( 38) 00:12:21.336 11317.462 - 11370.101: 23.6853% ( 39) 00:12:21.336 11370.101 - 11422.741: 24.1918% ( 47) 00:12:21.336 11422.741 - 11475.380: 24.7091% ( 48) 00:12:21.336 11475.380 - 11528.019: 25.0647% ( 33) 00:12:21.336 11528.019 - 11580.659: 25.3556% ( 27) 00:12:21.336 11580.659 - 11633.298: 25.7004% ( 32) 00:12:21.336 11633.298 - 11685.937: 26.0453% ( 32) 00:12:21.336 11685.937 - 11738.577: 26.3793% ( 31) 00:12:21.336 11738.577 - 11791.216: 26.6703% ( 27) 00:12:21.336 11791.216 - 11843.855: 26.9828% ( 29) 00:12:21.336 11843.855 - 11896.495: 27.3168% ( 31) 00:12:21.336 11896.495 - 11949.134: 27.7155% ( 37) 00:12:21.336 11949.134 - 12001.773: 28.0280% ( 29) 00:12:21.336 12001.773 - 12054.413: 28.2543% ( 21) 00:12:21.336 12054.413 - 12107.052: 28.4052% ( 14) 00:12:21.336 12107.052 - 12159.692: 28.6422% ( 22) 00:12:21.336 12159.692 - 12212.331: 28.8685% ( 21) 00:12:21.336 12212.331 - 12264.970: 29.0948% ( 21) 00:12:21.336 12264.970 - 12317.610: 29.3427% ( 23) 00:12:21.336 12317.610 - 12370.249: 29.5366% ( 18) 00:12:21.336 12370.249 - 12422.888: 29.7845% ( 23) 00:12:21.336 12422.888 - 12475.528: 30.1185% ( 31) 00:12:21.336 12475.528 - 12528.167: 30.4418% ( 30) 00:12:21.336 12528.167 - 12580.806: 30.8297% ( 36) 00:12:21.336 12580.806 - 12633.446: 31.1961% ( 34) 00:12:21.336 12633.446 - 12686.085: 31.6595% ( 43) 00:12:21.336 12686.085 - 12738.724: 32.3491% ( 64) 00:12:21.336 12738.724 - 12791.364: 33.0711% ( 67) 00:12:21.336 12791.364 - 12844.003: 34.0086% ( 87) 00:12:21.336 12844.003 - 12896.643: 35.3772% ( 127) 00:12:21.336 12896.643 - 12949.282: 37.1121% ( 161) 00:12:21.336 12949.282 - 13001.921: 38.6638% ( 144) 00:12:21.336 13001.921 - 13054.561: 40.7866% ( 197) 00:12:21.336 13054.561 - 13107.200: 42.4353% ( 153) 00:12:21.336 13107.200 - 13159.839: 44.1487% ( 159) 00:12:21.336 13159.839 - 13212.479: 45.8297% ( 156) 00:12:21.336 13212.479 - 13265.118: 47.0366% ( 112) 00:12:21.336 13265.118 - 13317.757: 48.5022% ( 136) 00:12:21.336 13317.757 - 13370.397: 50.0216% ( 141) 00:12:21.336 13370.397 - 13423.036: 51.4116% ( 129) 00:12:21.336 13423.036 - 13475.676: 52.9957% ( 147) 00:12:21.336 13475.676 - 13580.954: 55.5603% ( 238) 00:12:21.336 13580.954 - 13686.233: 58.3082% ( 255) 00:12:21.336 13686.233 - 13791.512: 60.2909% ( 184) 00:12:21.336 13791.512 - 13896.790: 62.5970% ( 214) 00:12:21.336 13896.790 - 14002.069: 64.1487% ( 144) 00:12:21.336 14002.069 - 14107.348: 65.6358% ( 138) 00:12:21.336 14107.348 - 14212.627: 66.8534% ( 113) 00:12:21.336 14212.627 - 14317.905: 67.8556% ( 93) 00:12:21.336 14317.905 - 14423.184: 68.8147% ( 89) 00:12:21.336 14423.184 - 14528.463: 69.5582% ( 69) 00:12:21.336 14528.463 - 14633.741: 70.3017% ( 69) 00:12:21.336 14633.741 - 14739.020: 71.0991% ( 74) 00:12:21.336 14739.020 - 14844.299: 71.8642% ( 71) 00:12:21.336 14844.299 - 14949.578: 72.7263% ( 80) 00:12:21.336 14949.578 - 15054.856: 73.7392% ( 94) 00:12:21.336 15054.856 - 15160.135: 75.3772% ( 152) 00:12:21.336 15160.135 - 15265.414: 76.2177% ( 78) 00:12:21.336 15265.414 - 15370.692: 77.0905% ( 81) 00:12:21.336 15370.692 - 15475.971: 77.8664% ( 72) 00:12:21.336 15475.971 - 15581.250: 78.4159% ( 51) 00:12:21.336 
15581.250 - 15686.529: 78.9009% ( 45) 00:12:21.336 15686.529 - 15791.807: 79.3427% ( 41) 00:12:21.336 15791.807 - 15897.086: 79.9677% ( 58) 00:12:21.336 15897.086 - 16002.365: 80.4526% ( 45) 00:12:21.336 16002.365 - 16107.643: 80.8728% ( 39) 00:12:21.336 16107.643 - 16212.922: 81.4009% ( 49) 00:12:21.336 16212.922 - 16318.201: 82.0151% ( 57) 00:12:21.336 16318.201 - 16423.480: 82.5754% ( 52) 00:12:21.336 16423.480 - 16528.758: 83.0819% ( 47) 00:12:21.336 16528.758 - 16634.037: 83.6315% ( 51) 00:12:21.336 16634.037 - 16739.316: 84.2565% ( 58) 00:12:21.336 16739.316 - 16844.594: 84.9030% ( 60) 00:12:21.336 16844.594 - 16949.873: 85.6034% ( 65) 00:12:21.336 16949.873 - 17055.152: 86.4224% ( 76) 00:12:21.336 17055.152 - 17160.431: 87.1552% ( 68) 00:12:21.336 17160.431 - 17265.709: 87.5862% ( 40) 00:12:21.336 17265.709 - 17370.988: 88.0388% ( 42) 00:12:21.336 17370.988 - 17476.267: 88.5991% ( 52) 00:12:21.336 17476.267 - 17581.545: 89.2672% ( 62) 00:12:21.336 17581.545 - 17686.824: 90.0000% ( 68) 00:12:21.336 17686.824 - 17792.103: 90.5927% ( 55) 00:12:21.336 17792.103 - 17897.382: 91.0884% ( 46) 00:12:21.336 17897.382 - 18002.660: 91.5086% ( 39) 00:12:21.336 18002.660 - 18107.939: 92.1121% ( 56) 00:12:21.336 18107.939 - 18213.218: 92.5647% ( 42) 00:12:21.336 18213.218 - 18318.496: 92.9849% ( 39) 00:12:21.336 18318.496 - 18423.775: 93.3513% ( 34) 00:12:21.336 18423.775 - 18529.054: 93.6422% ( 27) 00:12:21.336 18529.054 - 18634.333: 93.9332% ( 27) 00:12:21.336 18634.333 - 18739.611: 94.2241% ( 27) 00:12:21.336 18739.611 - 18844.890: 94.6767% ( 42) 00:12:21.336 18844.890 - 18950.169: 95.1940% ( 48) 00:12:21.336 18950.169 - 19055.447: 95.5172% ( 30) 00:12:21.336 19055.447 - 19160.726: 95.8405% ( 30) 00:12:21.336 19160.726 - 19266.005: 96.0776% ( 22) 00:12:21.336 19266.005 - 19371.284: 96.4009% ( 30) 00:12:21.336 19371.284 - 19476.562: 96.7565% ( 33) 00:12:21.336 19476.562 - 19581.841: 97.0582% ( 28) 00:12:21.337 19581.841 - 19687.120: 97.3060% ( 23) 00:12:21.337 19687.120 - 19792.398: 97.5216% ( 20) 00:12:21.337 19792.398 - 19897.677: 97.6832% ( 15) 00:12:21.337 19897.677 - 20002.956: 97.8664% ( 17) 00:12:21.337 20002.956 - 20108.235: 97.9957% ( 12) 00:12:21.337 20108.235 - 20213.513: 98.1466% ( 14) 00:12:21.337 20213.513 - 20318.792: 98.2759% ( 12) 00:12:21.337 20318.792 - 20424.071: 98.4052% ( 12) 00:12:21.337 20424.071 - 20529.349: 98.4591% ( 5) 00:12:21.337 20529.349 - 20634.628: 98.5129% ( 5) 00:12:21.337 20634.628 - 20739.907: 98.5668% ( 5) 00:12:21.337 20739.907 - 20845.186: 98.6207% ( 5) 00:12:21.337 33057.516 - 33268.074: 98.6853% ( 6) 00:12:21.337 33268.074 - 33478.631: 98.7716% ( 8) 00:12:21.337 33478.631 - 33689.189: 98.8470% ( 7) 00:12:21.337 33689.189 - 33899.746: 98.9332% ( 8) 00:12:21.337 33899.746 - 34110.304: 99.0194% ( 8) 00:12:21.337 34110.304 - 34320.861: 99.1056% ( 8) 00:12:21.337 34320.861 - 34531.418: 99.1918% ( 8) 00:12:21.337 34531.418 - 34741.976: 99.2672% ( 7) 00:12:21.337 34741.976 - 34952.533: 99.3103% ( 4) 00:12:21.337 41058.699 - 41269.256: 99.3211% ( 1) 00:12:21.337 41269.256 - 41479.814: 99.4073% ( 8) 00:12:21.337 41479.814 - 41690.371: 99.4935% ( 8) 00:12:21.337 41690.371 - 41900.929: 99.5690% ( 7) 00:12:21.337 41900.929 - 42111.486: 99.6552% ( 8) 00:12:21.337 42111.486 - 42322.043: 99.7414% ( 8) 00:12:21.337 42322.043 - 42532.601: 99.8276% ( 8) 00:12:21.337 42532.601 - 42743.158: 99.9138% ( 8) 00:12:21.337 42743.158 - 42953.716: 99.9892% ( 7) 00:12:21.337 42953.716 - 43164.273: 100.0000% ( 1) 00:12:21.337 00:12:21.337 Latency histogram for PCIE (0000:00:13.0) 
NSID 1 from core 0: 00:12:21.337 ============================================================================== 00:12:21.337 Range in us Cumulative IO count 00:12:21.337 8685.494 - 8738.133: 0.0108% ( 1) 00:12:21.337 8843.412 - 8896.051: 0.0970% ( 8) 00:12:21.337 8896.051 - 8948.691: 0.2478% ( 14) 00:12:21.337 8948.691 - 9001.330: 0.4418% ( 18) 00:12:21.337 9001.330 - 9053.969: 0.7112% ( 25) 00:12:21.337 9053.969 - 9106.609: 1.1422% ( 40) 00:12:21.337 9106.609 - 9159.248: 1.7996% ( 61) 00:12:21.337 9159.248 - 9211.888: 2.1875% ( 36) 00:12:21.337 9211.888 - 9264.527: 2.8556% ( 62) 00:12:21.337 9264.527 - 9317.166: 3.3297% ( 44) 00:12:21.337 9317.166 - 9369.806: 3.9224% ( 55) 00:12:21.337 9369.806 - 9422.445: 4.7737% ( 79) 00:12:21.337 9422.445 - 9475.084: 5.3556% ( 54) 00:12:21.337 9475.084 - 9527.724: 6.0237% ( 62) 00:12:21.337 9527.724 - 9580.363: 6.6272% ( 56) 00:12:21.337 9580.363 - 9633.002: 7.2629% ( 59) 00:12:21.337 9633.002 - 9685.642: 7.9634% ( 65) 00:12:21.337 9685.642 - 9738.281: 8.7823% ( 76) 00:12:21.337 9738.281 - 9790.920: 9.3319% ( 51) 00:12:21.337 9790.920 - 9843.560: 9.7629% ( 40) 00:12:21.337 9843.560 - 9896.199: 10.1293% ( 34) 00:12:21.337 9896.199 - 9948.839: 10.3556% ( 21) 00:12:21.337 9948.839 - 10001.478: 10.6034% ( 23) 00:12:21.337 10001.478 - 10054.117: 10.8513% ( 23) 00:12:21.337 10054.117 - 10106.757: 11.0237% ( 16) 00:12:21.337 10106.757 - 10159.396: 11.3039% ( 26) 00:12:21.337 10159.396 - 10212.035: 11.5948% ( 27) 00:12:21.337 10212.035 - 10264.675: 12.1552% ( 52) 00:12:21.337 10264.675 - 10317.314: 12.5431% ( 36) 00:12:21.337 10317.314 - 10369.953: 13.0927% ( 51) 00:12:21.337 10369.953 - 10422.593: 13.6099% ( 48) 00:12:21.337 10422.593 - 10475.232: 14.2134% ( 56) 00:12:21.337 10475.232 - 10527.871: 15.1509% ( 87) 00:12:21.337 10527.871 - 10580.511: 16.0129% ( 80) 00:12:21.337 10580.511 - 10633.150: 16.4655% ( 42) 00:12:21.337 10633.150 - 10685.790: 17.0259% ( 52) 00:12:21.337 10685.790 - 10738.429: 17.6078% ( 54) 00:12:21.337 10738.429 - 10791.068: 18.0172% ( 38) 00:12:21.337 10791.068 - 10843.708: 18.5991% ( 54) 00:12:21.337 10843.708 - 10896.347: 19.1272% ( 49) 00:12:21.337 10896.347 - 10948.986: 19.6444% ( 48) 00:12:21.337 10948.986 - 11001.626: 20.1616% ( 48) 00:12:21.337 11001.626 - 11054.265: 20.5819% ( 39) 00:12:21.337 11054.265 - 11106.904: 20.9052% ( 30) 00:12:21.337 11106.904 - 11159.544: 21.2069% ( 28) 00:12:21.337 11159.544 - 11212.183: 21.5409% ( 31) 00:12:21.337 11212.183 - 11264.822: 22.0474% ( 47) 00:12:21.337 11264.822 - 11317.462: 22.4677% ( 39) 00:12:21.337 11317.462 - 11370.101: 22.8233% ( 33) 00:12:21.337 11370.101 - 11422.741: 23.1573% ( 31) 00:12:21.337 11422.741 - 11475.380: 23.5776% ( 39) 00:12:21.337 11475.380 - 11528.019: 24.0517% ( 44) 00:12:21.337 11528.019 - 11580.659: 24.7414% ( 64) 00:12:21.337 11580.659 - 11633.298: 25.2047% ( 43) 00:12:21.337 11633.298 - 11685.937: 25.5065% ( 28) 00:12:21.337 11685.937 - 11738.577: 25.7866% ( 26) 00:12:21.337 11738.577 - 11791.216: 26.0560% ( 25) 00:12:21.337 11791.216 - 11843.855: 26.2177% ( 15) 00:12:21.337 11843.855 - 11896.495: 26.3470% ( 12) 00:12:21.337 11896.495 - 11949.134: 26.4978% ( 14) 00:12:21.337 11949.134 - 12001.773: 26.6703% ( 16) 00:12:21.337 12001.773 - 12054.413: 26.7888% ( 11) 00:12:21.337 12054.413 - 12107.052: 26.9828% ( 18) 00:12:21.337 12107.052 - 12159.692: 27.1552% ( 16) 00:12:21.337 12159.692 - 12212.331: 27.4353% ( 26) 00:12:21.337 12212.331 - 12264.970: 27.6616% ( 21) 00:12:21.337 12264.970 - 12317.610: 27.8664% ( 19) 00:12:21.337 12317.610 - 12370.249: 28.1034% ( 
22) 00:12:21.337 12370.249 - 12422.888: 28.4483% ( 32) 00:12:21.337 12422.888 - 12475.528: 28.7823% ( 31) 00:12:21.337 12475.528 - 12528.167: 29.1487% ( 34) 00:12:21.337 12528.167 - 12580.806: 29.6013% ( 42) 00:12:21.337 12580.806 - 12633.446: 29.9784% ( 35) 00:12:21.337 12633.446 - 12686.085: 30.5819% ( 56) 00:12:21.337 12686.085 - 12738.724: 31.2823% ( 65) 00:12:21.337 12738.724 - 12791.364: 32.0690% ( 73) 00:12:21.337 12791.364 - 12844.003: 33.1358% ( 99) 00:12:21.337 12844.003 - 12896.643: 34.3319% ( 111) 00:12:21.337 12896.643 - 12949.282: 36.1099% ( 165) 00:12:21.337 12949.282 - 13001.921: 37.2845% ( 109) 00:12:21.337 13001.921 - 13054.561: 38.9332% ( 153) 00:12:21.337 13054.561 - 13107.200: 40.6250% ( 157) 00:12:21.337 13107.200 - 13159.839: 42.0797% ( 135) 00:12:21.337 13159.839 - 13212.479: 43.7392% ( 154) 00:12:21.337 13212.479 - 13265.118: 44.9461% ( 112) 00:12:21.337 13265.118 - 13317.757: 46.9612% ( 187) 00:12:21.337 13317.757 - 13370.397: 48.2435% ( 119) 00:12:21.337 13370.397 - 13423.036: 49.5905% ( 125) 00:12:21.337 13423.036 - 13475.676: 51.1207% ( 142) 00:12:21.337 13475.676 - 13580.954: 54.3966% ( 304) 00:12:21.337 13580.954 - 13686.233: 57.3599% ( 275) 00:12:21.337 13686.233 - 13791.512: 59.6659% ( 214) 00:12:21.337 13791.512 - 13896.790: 62.4784% ( 261) 00:12:21.337 13896.790 - 14002.069: 63.9978% ( 141) 00:12:21.337 14002.069 - 14107.348: 65.6789% ( 156) 00:12:21.337 14107.348 - 14212.627: 67.3168% ( 152) 00:12:21.337 14212.627 - 14317.905: 68.4806% ( 108) 00:12:21.337 14317.905 - 14423.184: 69.4397% ( 89) 00:12:21.337 14423.184 - 14528.463: 70.2263% ( 73) 00:12:21.337 14528.463 - 14633.741: 71.1207% ( 83) 00:12:21.337 14633.741 - 14739.020: 71.9828% ( 80) 00:12:21.337 14739.020 - 14844.299: 72.7694% ( 73) 00:12:21.337 14844.299 - 14949.578: 73.4052% ( 59) 00:12:21.337 14949.578 - 15054.856: 73.9224% ( 48) 00:12:21.337 15054.856 - 15160.135: 74.5690% ( 60) 00:12:21.337 15160.135 - 15265.414: 74.9784% ( 38) 00:12:21.337 15265.414 - 15370.692: 75.5172% ( 50) 00:12:21.337 15370.692 - 15475.971: 76.1099% ( 55) 00:12:21.337 15475.971 - 15581.250: 77.0043% ( 83) 00:12:21.337 15581.250 - 15686.529: 77.6078% ( 56) 00:12:21.337 15686.529 - 15791.807: 78.3836% ( 72) 00:12:21.337 15791.807 - 15897.086: 79.2241% ( 78) 00:12:21.337 15897.086 - 16002.365: 79.9892% ( 71) 00:12:21.337 16002.365 - 16107.643: 80.8836% ( 83) 00:12:21.337 16107.643 - 16212.922: 81.9828% ( 102) 00:12:21.337 16212.922 - 16318.201: 82.6401% ( 61) 00:12:21.337 16318.201 - 16423.480: 83.2974% ( 61) 00:12:21.337 16423.480 - 16528.758: 83.8578% ( 52) 00:12:21.337 16528.758 - 16634.037: 84.5582% ( 65) 00:12:21.337 16634.037 - 16739.316: 85.2694% ( 66) 00:12:21.337 16739.316 - 16844.594: 85.9806% ( 66) 00:12:21.337 16844.594 - 16949.873: 86.8103% ( 77) 00:12:21.337 16949.873 - 17055.152: 87.5000% ( 64) 00:12:21.337 17055.152 - 17160.431: 88.1789% ( 63) 00:12:21.337 17160.431 - 17265.709: 89.0302% ( 79) 00:12:21.337 17265.709 - 17370.988: 89.7091% ( 63) 00:12:21.337 17370.988 - 17476.267: 90.3772% ( 62) 00:12:21.337 17476.267 - 17581.545: 90.9806% ( 56) 00:12:21.337 17581.545 - 17686.824: 91.4116% ( 40) 00:12:21.337 17686.824 - 17792.103: 91.8427% ( 40) 00:12:21.337 17792.103 - 17897.382: 92.2845% ( 41) 00:12:21.337 17897.382 - 18002.660: 92.7694% ( 45) 00:12:21.337 18002.660 - 18107.939: 93.3513% ( 54) 00:12:21.337 18107.939 - 18213.218: 93.8039% ( 42) 00:12:21.337 18213.218 - 18318.496: 94.0948% ( 27) 00:12:21.337 18318.496 - 18423.775: 94.4935% ( 37) 00:12:21.337 18423.775 - 18529.054: 94.8491% ( 33) 
00:12:21.337 18529.054 - 18634.333: 95.0970% ( 23) 00:12:21.337 18634.333 - 18739.611: 95.4095% ( 29) 00:12:21.337 18739.611 - 18844.890: 95.8513% ( 41) 00:12:21.337 18844.890 - 18950.169: 96.2392% ( 36) 00:12:21.337 18950.169 - 19055.447: 96.4978% ( 24) 00:12:21.337 19055.447 - 19160.726: 96.7241% ( 21) 00:12:21.337 19160.726 - 19266.005: 96.9397% ( 20) 00:12:21.337 19266.005 - 19371.284: 97.1444% ( 19) 00:12:21.337 19371.284 - 19476.562: 97.4461% ( 28) 00:12:21.337 19476.562 - 19581.841: 97.6509% ( 19) 00:12:21.337 19581.841 - 19687.120: 97.8772% ( 21) 00:12:21.337 19687.120 - 19792.398: 98.0496% ( 16) 00:12:21.337 19792.398 - 19897.677: 98.1573% ( 10) 00:12:21.337 19897.677 - 20002.956: 98.3082% ( 14) 00:12:21.337 20002.956 - 20108.235: 98.3836% ( 7) 00:12:21.337 20108.235 - 20213.513: 98.4806% ( 9) 00:12:21.337 20213.513 - 20318.792: 98.5237% ( 4) 00:12:21.337 20318.792 - 20424.071: 98.5668% ( 4) 00:12:21.338 20424.071 - 20529.349: 98.6099% ( 4) 00:12:21.338 20529.349 - 20634.628: 98.6207% ( 1) 00:12:21.338 32215.287 - 32425.844: 98.6422% ( 2) 00:12:21.338 32425.844 - 32636.402: 98.7284% ( 8) 00:12:21.338 32636.402 - 32846.959: 98.8039% ( 7) 00:12:21.338 32846.959 - 33057.516: 98.8901% ( 8) 00:12:21.338 33057.516 - 33268.074: 98.9763% ( 8) 00:12:21.338 33268.074 - 33478.631: 99.0625% ( 8) 00:12:21.338 33478.631 - 33689.189: 99.1595% ( 9) 00:12:21.338 33689.189 - 33899.746: 99.2457% ( 8) 00:12:21.338 33899.746 - 34110.304: 99.3103% ( 6) 00:12:21.338 40216.469 - 40427.027: 99.3858% ( 7) 00:12:21.338 40427.027 - 40637.584: 99.4720% ( 8) 00:12:21.338 40637.584 - 40848.141: 99.5474% ( 7) 00:12:21.338 40848.141 - 41058.699: 99.6336% ( 8) 00:12:21.338 41058.699 - 41269.256: 99.7091% ( 7) 00:12:21.338 41269.256 - 41479.814: 99.7953% ( 8) 00:12:21.338 41479.814 - 41690.371: 99.8815% ( 8) 00:12:21.338 41690.371 - 41900.929: 99.9677% ( 8) 00:12:21.338 41900.929 - 42111.486: 100.0000% ( 3) 00:12:21.338 00:12:21.338 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:21.338 ============================================================================== 00:12:21.338 Range in us Cumulative IO count 00:12:21.338 8738.133 - 8790.773: 0.0216% ( 2) 00:12:21.338 8790.773 - 8843.412: 0.1293% ( 10) 00:12:21.338 8843.412 - 8896.051: 0.2478% ( 11) 00:12:21.338 8896.051 - 8948.691: 0.4095% ( 15) 00:12:21.338 8948.691 - 9001.330: 0.5280% ( 11) 00:12:21.338 9001.330 - 9053.969: 0.6466% ( 11) 00:12:21.338 9053.969 - 9106.609: 0.9052% ( 24) 00:12:21.338 9106.609 - 9159.248: 1.2392% ( 31) 00:12:21.338 9159.248 - 9211.888: 1.6918% ( 42) 00:12:21.338 9211.888 - 9264.527: 2.3060% ( 57) 00:12:21.338 9264.527 - 9317.166: 3.0388% ( 68) 00:12:21.338 9317.166 - 9369.806: 3.9547% ( 85) 00:12:21.338 9369.806 - 9422.445: 4.7629% ( 75) 00:12:21.338 9422.445 - 9475.084: 5.6789% ( 85) 00:12:21.338 9475.084 - 9527.724: 6.3254% ( 60) 00:12:21.338 9527.724 - 9580.363: 6.8642% ( 50) 00:12:21.338 9580.363 - 9633.002: 7.2414% ( 35) 00:12:21.338 9633.002 - 9685.642: 7.6509% ( 38) 00:12:21.338 9685.642 - 9738.281: 8.1250% ( 44) 00:12:21.338 9738.281 - 9790.920: 8.6422% ( 48) 00:12:21.338 9790.920 - 9843.560: 9.1379% ( 46) 00:12:21.338 9843.560 - 9896.199: 9.6444% ( 47) 00:12:21.338 9896.199 - 9948.839: 10.1509% ( 47) 00:12:21.338 9948.839 - 10001.478: 10.6681% ( 48) 00:12:21.338 10001.478 - 10054.117: 11.0022% ( 31) 00:12:21.338 10054.117 - 10106.757: 11.4009% ( 37) 00:12:21.338 10106.757 - 10159.396: 11.6810% ( 26) 00:12:21.338 10159.396 - 10212.035: 12.1875% ( 47) 00:12:21.338 10212.035 - 10264.675: 12.6940% ( 47) 
00:12:21.338 10264.675 - 10317.314: 13.3297% ( 59) 00:12:21.338 10317.314 - 10369.953: 14.0086% ( 63) 00:12:21.338 10369.953 - 10422.593: 14.5259% ( 48) 00:12:21.338 10422.593 - 10475.232: 14.9138% ( 36) 00:12:21.338 10475.232 - 10527.871: 15.3233% ( 38) 00:12:21.338 10527.871 - 10580.511: 15.9159% ( 55) 00:12:21.338 10580.511 - 10633.150: 16.2500% ( 31) 00:12:21.338 10633.150 - 10685.790: 16.4978% ( 23) 00:12:21.338 10685.790 - 10738.429: 16.7672% ( 25) 00:12:21.338 10738.429 - 10791.068: 17.0797% ( 29) 00:12:21.338 10791.068 - 10843.708: 17.5754% ( 46) 00:12:21.338 10843.708 - 10896.347: 18.0603% ( 45) 00:12:21.338 10896.347 - 10948.986: 18.6961% ( 59) 00:12:21.338 10948.986 - 11001.626: 19.3750% ( 63) 00:12:21.338 11001.626 - 11054.265: 20.1401% ( 71) 00:12:21.338 11054.265 - 11106.904: 20.8836% ( 69) 00:12:21.338 11106.904 - 11159.544: 21.7349% ( 79) 00:12:21.338 11159.544 - 11212.183: 22.3384% ( 56) 00:12:21.338 11212.183 - 11264.822: 22.8341% ( 46) 00:12:21.338 11264.822 - 11317.462: 23.2866% ( 42) 00:12:21.338 11317.462 - 11370.101: 23.8470% ( 52) 00:12:21.338 11370.101 - 11422.741: 24.2349% ( 36) 00:12:21.338 11422.741 - 11475.380: 24.6444% ( 38) 00:12:21.338 11475.380 - 11528.019: 24.8599% ( 20) 00:12:21.338 11528.019 - 11580.659: 25.1401% ( 26) 00:12:21.338 11580.659 - 11633.298: 25.3772% ( 22) 00:12:21.338 11633.298 - 11685.937: 25.5388% ( 15) 00:12:21.338 11685.937 - 11738.577: 25.6250% ( 8) 00:12:21.338 11738.577 - 11791.216: 25.8082% ( 17) 00:12:21.338 11791.216 - 11843.855: 25.9698% ( 15) 00:12:21.338 11843.855 - 11896.495: 26.2177% ( 23) 00:12:21.338 11896.495 - 11949.134: 26.5086% ( 27) 00:12:21.338 11949.134 - 12001.773: 26.6918% ( 17) 00:12:21.338 12001.773 - 12054.413: 26.9073% ( 20) 00:12:21.338 12054.413 - 12107.052: 27.2091% ( 28) 00:12:21.338 12107.052 - 12159.692: 27.5323% ( 30) 00:12:21.338 12159.692 - 12212.331: 27.8664% ( 31) 00:12:21.338 12212.331 - 12264.970: 28.1573% ( 27) 00:12:21.338 12264.970 - 12317.610: 28.5022% ( 32) 00:12:21.338 12317.610 - 12370.249: 28.8362% ( 31) 00:12:21.338 12370.249 - 12422.888: 29.1810% ( 32) 00:12:21.338 12422.888 - 12475.528: 29.5259% ( 32) 00:12:21.338 12475.528 - 12528.167: 29.8276% ( 28) 00:12:21.338 12528.167 - 12580.806: 30.0108% ( 17) 00:12:21.338 12580.806 - 12633.446: 30.2909% ( 26) 00:12:21.338 12633.446 - 12686.085: 30.8190% ( 49) 00:12:21.338 12686.085 - 12738.724: 31.5086% ( 64) 00:12:21.338 12738.724 - 12791.364: 32.3491% ( 78) 00:12:21.338 12791.364 - 12844.003: 33.5237% ( 109) 00:12:21.338 12844.003 - 12896.643: 34.6228% ( 102) 00:12:21.338 12896.643 - 12949.282: 36.6164% ( 185) 00:12:21.338 12949.282 - 13001.921: 37.7263% ( 103) 00:12:21.338 13001.921 - 13054.561: 39.0409% ( 122) 00:12:21.338 13054.561 - 13107.200: 40.3879% ( 125) 00:12:21.338 13107.200 - 13159.839: 42.3060% ( 178) 00:12:21.338 13159.839 - 13212.479: 44.0733% ( 164) 00:12:21.338 13212.479 - 13265.118: 45.8944% ( 169) 00:12:21.338 13265.118 - 13317.757: 47.4677% ( 146) 00:12:21.338 13317.757 - 13370.397: 49.3750% ( 177) 00:12:21.338 13370.397 - 13423.036: 50.7112% ( 124) 00:12:21.338 13423.036 - 13475.676: 51.9073% ( 111) 00:12:21.338 13475.676 - 13580.954: 55.3556% ( 320) 00:12:21.338 13580.954 - 13686.233: 58.3513% ( 278) 00:12:21.338 13686.233 - 13791.512: 60.8082% ( 228) 00:12:21.338 13791.512 - 13896.790: 62.6724% ( 173) 00:12:21.338 13896.790 - 14002.069: 63.9763% ( 121) 00:12:21.338 14002.069 - 14107.348: 65.0323% ( 98) 00:12:21.338 14107.348 - 14212.627: 65.9806% ( 88) 00:12:21.338 14212.627 - 14317.905: 66.8427% ( 80) 00:12:21.338 
14317.905 - 14423.184: 67.7909% ( 88) 00:12:21.338 14423.184 - 14528.463: 68.8901% ( 102) 00:12:21.338 14528.463 - 14633.741: 69.8276% ( 87) 00:12:21.338 14633.741 - 14739.020: 70.5496% ( 67) 00:12:21.338 14739.020 - 14844.299: 71.3362% ( 73) 00:12:21.338 14844.299 - 14949.578: 72.2629% ( 86) 00:12:21.338 14949.578 - 15054.856: 73.3728% ( 103) 00:12:21.338 15054.856 - 15160.135: 74.4181% ( 97) 00:12:21.338 15160.135 - 15265.414: 75.0970% ( 63) 00:12:21.338 15265.414 - 15370.692: 75.8082% ( 66) 00:12:21.338 15370.692 - 15475.971: 76.5086% ( 65) 00:12:21.338 15475.971 - 15581.250: 77.0043% ( 46) 00:12:21.338 15581.250 - 15686.529: 77.4569% ( 42) 00:12:21.338 15686.529 - 15791.807: 77.9741% ( 48) 00:12:21.338 15791.807 - 15897.086: 78.5991% ( 58) 00:12:21.338 15897.086 - 16002.365: 79.3858% ( 73) 00:12:21.338 16002.365 - 16107.643: 80.0108% ( 58) 00:12:21.338 16107.643 - 16212.922: 80.6034% ( 55) 00:12:21.338 16212.922 - 16318.201: 81.4978% ( 83) 00:12:21.338 16318.201 - 16423.480: 82.2414% ( 69) 00:12:21.338 16423.480 - 16528.758: 83.3728% ( 105) 00:12:21.338 16528.758 - 16634.037: 84.6552% ( 119) 00:12:21.338 16634.037 - 16739.316: 85.6466% ( 92) 00:12:21.338 16739.316 - 16844.594: 86.3039% ( 61) 00:12:21.338 16844.594 - 16949.873: 86.8642% ( 52) 00:12:21.338 16949.873 - 17055.152: 87.5647% ( 65) 00:12:21.338 17055.152 - 17160.431: 88.4591% ( 83) 00:12:21.338 17160.431 - 17265.709: 89.0517% ( 55) 00:12:21.338 17265.709 - 17370.988: 89.5474% ( 46) 00:12:21.338 17370.988 - 17476.267: 90.0000% ( 42) 00:12:21.338 17476.267 - 17581.545: 90.4526% ( 42) 00:12:21.338 17581.545 - 17686.824: 90.9159% ( 43) 00:12:21.338 17686.824 - 17792.103: 91.4116% ( 46) 00:12:21.338 17792.103 - 17897.382: 91.8750% ( 43) 00:12:21.338 17897.382 - 18002.660: 92.4784% ( 56) 00:12:21.338 18002.660 - 18107.939: 93.0065% ( 49) 00:12:21.338 18107.939 - 18213.218: 93.6315% ( 58) 00:12:21.338 18213.218 - 18318.496: 94.1918% ( 52) 00:12:21.338 18318.496 - 18423.775: 94.6659% ( 44) 00:12:21.338 18423.775 - 18529.054: 95.0647% ( 37) 00:12:21.338 18529.054 - 18634.333: 95.4634% ( 37) 00:12:21.338 18634.333 - 18739.611: 95.8190% ( 33) 00:12:21.338 18739.611 - 18844.890: 96.1746% ( 33) 00:12:21.338 18844.890 - 18950.169: 96.7349% ( 52) 00:12:21.338 18950.169 - 19055.447: 97.0151% ( 26) 00:12:21.338 19055.447 - 19160.726: 97.2629% ( 23) 00:12:21.338 19160.726 - 19266.005: 97.4569% ( 18) 00:12:21.338 19266.005 - 19371.284: 97.6078% ( 14) 00:12:21.338 19371.284 - 19476.562: 97.6940% ( 8) 00:12:21.338 19476.562 - 19581.841: 97.7694% ( 7) 00:12:21.338 19581.841 - 19687.120: 97.8556% ( 8) 00:12:21.338 19687.120 - 19792.398: 97.9741% ( 11) 00:12:21.338 19792.398 - 19897.677: 98.0819% ( 10) 00:12:21.338 19897.677 - 20002.956: 98.1250% ( 4) 00:12:21.338 20002.956 - 20108.235: 98.1897% ( 6) 00:12:21.338 20108.235 - 20213.513: 98.3190% ( 12) 00:12:21.338 20213.513 - 20318.792: 98.4914% ( 16) 00:12:21.338 20318.792 - 20424.071: 98.5560% ( 6) 00:12:21.338 20424.071 - 20529.349: 98.5991% ( 4) 00:12:21.338 20529.349 - 20634.628: 98.6207% ( 2) 00:12:21.338 30741.385 - 30951.942: 98.7069% ( 8) 00:12:21.338 30951.942 - 31162.500: 98.7931% ( 8) 00:12:21.338 31162.500 - 31373.057: 98.8793% ( 8) 00:12:21.338 31373.057 - 31583.614: 98.9655% ( 8) 00:12:21.338 31583.614 - 31794.172: 99.0517% ( 8) 00:12:21.338 31794.172 - 32004.729: 99.1379% ( 8) 00:12:21.338 32004.729 - 32215.287: 99.2241% ( 8) 00:12:21.338 32215.287 - 32425.844: 99.3103% ( 8) 00:12:21.338 38321.452 - 38532.010: 99.3966% ( 8) 00:12:21.338 38532.010 - 38742.567: 99.4828% ( 8) 
00:12:21.339 38742.567 - 38953.124: 99.5690% ( 8) 00:12:21.339 38953.124 - 39163.682: 99.6444% ( 7) 00:12:21.339 39163.682 - 39374.239: 99.7306% ( 8) 00:12:21.339 39374.239 - 39584.797: 99.8168% ( 8) 00:12:21.339 39584.797 - 39795.354: 99.9030% ( 8) 00:12:21.339 39795.354 - 40005.912: 99.9892% ( 8) 00:12:21.339 40005.912 - 40216.469: 100.0000% ( 1) 00:12:21.339 00:12:21.339 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:21.339 ============================================================================== 00:12:21.339 Range in us Cumulative IO count 00:12:21.339 8369.658 - 8422.297: 0.0108% ( 1) 00:12:21.339 8422.297 - 8474.937: 0.0323% ( 2) 00:12:21.339 8474.937 - 8527.576: 0.0970% ( 6) 00:12:21.339 8527.576 - 8580.215: 0.1616% ( 6) 00:12:21.339 8580.215 - 8632.855: 0.2694% ( 10) 00:12:21.339 8632.855 - 8685.494: 0.4310% ( 15) 00:12:21.339 8685.494 - 8738.133: 0.5172% ( 8) 00:12:21.339 8738.133 - 8790.773: 0.5819% ( 6) 00:12:21.339 8790.773 - 8843.412: 0.6789% ( 9) 00:12:21.339 8843.412 - 8896.051: 0.7543% ( 7) 00:12:21.339 8896.051 - 8948.691: 0.8728% ( 11) 00:12:21.339 8948.691 - 9001.330: 0.9806% ( 10) 00:12:21.339 9001.330 - 9053.969: 1.2069% ( 21) 00:12:21.339 9053.969 - 9106.609: 1.4440% ( 22) 00:12:21.339 9106.609 - 9159.248: 1.7241% ( 26) 00:12:21.339 9159.248 - 9211.888: 2.0582% ( 31) 00:12:21.339 9211.888 - 9264.527: 2.5647% ( 47) 00:12:21.339 9264.527 - 9317.166: 3.2974% ( 68) 00:12:21.339 9317.166 - 9369.806: 3.8793% ( 54) 00:12:21.339 9369.806 - 9422.445: 4.6552% ( 72) 00:12:21.339 9422.445 - 9475.084: 5.6034% ( 88) 00:12:21.339 9475.084 - 9527.724: 6.2608% ( 61) 00:12:21.339 9527.724 - 9580.363: 6.8427% ( 54) 00:12:21.339 9580.363 - 9633.002: 7.4784% ( 59) 00:12:21.339 9633.002 - 9685.642: 7.8341% ( 33) 00:12:21.339 9685.642 - 9738.281: 8.2543% ( 39) 00:12:21.339 9738.281 - 9790.920: 8.6315% ( 35) 00:12:21.339 9790.920 - 9843.560: 8.9547% ( 30) 00:12:21.339 9843.560 - 9896.199: 9.4181% ( 43) 00:12:21.339 9896.199 - 9948.839: 9.9677% ( 51) 00:12:21.339 9948.839 - 10001.478: 10.3772% ( 38) 00:12:21.339 10001.478 - 10054.117: 10.7759% ( 37) 00:12:21.339 10054.117 - 10106.757: 11.0560% ( 26) 00:12:21.339 10106.757 - 10159.396: 11.3901% ( 31) 00:12:21.339 10159.396 - 10212.035: 11.7996% ( 38) 00:12:21.339 10212.035 - 10264.675: 12.1767% ( 35) 00:12:21.339 10264.675 - 10317.314: 12.6509% ( 44) 00:12:21.339 10317.314 - 10369.953: 13.2220% ( 53) 00:12:21.339 10369.953 - 10422.593: 13.7823% ( 52) 00:12:21.339 10422.593 - 10475.232: 14.1056% ( 30) 00:12:21.339 10475.232 - 10527.871: 14.3534% ( 23) 00:12:21.339 10527.871 - 10580.511: 14.6875% ( 31) 00:12:21.339 10580.511 - 10633.150: 15.3341% ( 60) 00:12:21.339 10633.150 - 10685.790: 16.0129% ( 63) 00:12:21.339 10685.790 - 10738.429: 16.6272% ( 57) 00:12:21.339 10738.429 - 10791.068: 17.5862% ( 89) 00:12:21.339 10791.068 - 10843.708: 18.3190% ( 68) 00:12:21.339 10843.708 - 10896.347: 19.0302% ( 66) 00:12:21.339 10896.347 - 10948.986: 20.0000% ( 90) 00:12:21.339 10948.986 - 11001.626: 20.5496% ( 51) 00:12:21.339 11001.626 - 11054.265: 21.2931% ( 69) 00:12:21.339 11054.265 - 11106.904: 21.8319% ( 50) 00:12:21.339 11106.904 - 11159.544: 22.2091% ( 35) 00:12:21.339 11159.544 - 11212.183: 22.5970% ( 36) 00:12:21.339 11212.183 - 11264.822: 22.9849% ( 36) 00:12:21.339 11264.822 - 11317.462: 23.3728% ( 36) 00:12:21.339 11317.462 - 11370.101: 23.8147% ( 41) 00:12:21.339 11370.101 - 11422.741: 24.3103% ( 46) 00:12:21.339 11422.741 - 11475.380: 24.6875% ( 35) 00:12:21.339 11475.380 - 11528.019: 25.1509% ( 43) 00:12:21.339 
11528.019 - 11580.659: 25.4418% ( 27) 00:12:21.339 11580.659 - 11633.298: 25.6358% ( 18) 00:12:21.339 11633.298 - 11685.937: 25.7112% ( 7) 00:12:21.339 11685.937 - 11738.577: 25.7651% ( 5) 00:12:21.339 11738.577 - 11791.216: 25.8513% ( 8) 00:12:21.339 11791.216 - 11843.855: 25.9806% ( 12) 00:12:21.339 11843.855 - 11896.495: 26.0668% ( 8) 00:12:21.339 11896.495 - 11949.134: 26.2823% ( 20) 00:12:21.339 11949.134 - 12001.773: 26.6703% ( 36) 00:12:21.339 12001.773 - 12054.413: 27.2522% ( 54) 00:12:21.339 12054.413 - 12107.052: 27.6293% ( 35) 00:12:21.339 12107.052 - 12159.692: 28.0603% ( 40) 00:12:21.339 12159.692 - 12212.331: 28.3621% ( 28) 00:12:21.339 12212.331 - 12264.970: 28.6530% ( 27) 00:12:21.339 12264.970 - 12317.610: 29.0302% ( 35) 00:12:21.339 12317.610 - 12370.249: 29.3211% ( 27) 00:12:21.339 12370.249 - 12422.888: 29.5366% ( 20) 00:12:21.339 12422.888 - 12475.528: 29.7198% ( 17) 00:12:21.339 12475.528 - 12528.167: 29.9353% ( 20) 00:12:21.339 12528.167 - 12580.806: 30.1832% ( 23) 00:12:21.339 12580.806 - 12633.446: 30.4095% ( 21) 00:12:21.339 12633.446 - 12686.085: 30.8082% ( 37) 00:12:21.339 12686.085 - 12738.724: 31.3793% ( 53) 00:12:21.339 12738.724 - 12791.364: 32.0582% ( 63) 00:12:21.339 12791.364 - 12844.003: 32.9310% ( 81) 00:12:21.339 12844.003 - 12896.643: 34.1595% ( 114) 00:12:21.339 12896.643 - 12949.282: 35.8513% ( 157) 00:12:21.339 12949.282 - 13001.921: 37.4030% ( 144) 00:12:21.339 13001.921 - 13054.561: 39.1379% ( 161) 00:12:21.339 13054.561 - 13107.200: 40.5388% ( 130) 00:12:21.339 13107.200 - 13159.839: 41.8966% ( 126) 00:12:21.339 13159.839 - 13212.479: 43.7823% ( 175) 00:12:21.339 13212.479 - 13265.118: 45.4526% ( 155) 00:12:21.339 13265.118 - 13317.757: 46.8427% ( 129) 00:12:21.339 13317.757 - 13370.397: 48.2651% ( 132) 00:12:21.339 13370.397 - 13423.036: 50.3017% ( 189) 00:12:21.339 13423.036 - 13475.676: 52.0151% ( 159) 00:12:21.339 13475.676 - 13580.954: 55.1401% ( 290) 00:12:21.339 13580.954 - 13686.233: 58.7284% ( 333) 00:12:21.339 13686.233 - 13791.512: 60.8728% ( 199) 00:12:21.339 13791.512 - 13896.790: 62.4138% ( 143) 00:12:21.339 13896.790 - 14002.069: 63.6099% ( 111) 00:12:21.339 14002.069 - 14107.348: 64.8276% ( 113) 00:12:21.339 14107.348 - 14212.627: 65.7651% ( 87) 00:12:21.339 14212.627 - 14317.905: 66.4332% ( 62) 00:12:21.339 14317.905 - 14423.184: 67.3922% ( 89) 00:12:21.339 14423.184 - 14528.463: 68.2866% ( 83) 00:12:21.339 14528.463 - 14633.741: 69.0194% ( 68) 00:12:21.339 14633.741 - 14739.020: 70.1293% ( 103) 00:12:21.339 14739.020 - 14844.299: 71.1530% ( 95) 00:12:21.339 14844.299 - 14949.578: 72.0582% ( 84) 00:12:21.339 14949.578 - 15054.856: 73.0065% ( 88) 00:12:21.339 15054.856 - 15160.135: 74.0409% ( 96) 00:12:21.339 15160.135 - 15265.414: 74.8491% ( 75) 00:12:21.339 15265.414 - 15370.692: 75.5603% ( 66) 00:12:21.339 15370.692 - 15475.971: 76.4009% ( 78) 00:12:21.339 15475.971 - 15581.250: 77.2629% ( 80) 00:12:21.339 15581.250 - 15686.529: 78.0711% ( 75) 00:12:21.339 15686.529 - 15791.807: 78.7608% ( 64) 00:12:21.339 15791.807 - 15897.086: 79.3858% ( 58) 00:12:21.339 15897.086 - 16002.365: 79.9246% ( 50) 00:12:21.339 16002.365 - 16107.643: 80.6034% ( 63) 00:12:21.339 16107.643 - 16212.922: 81.1746% ( 53) 00:12:21.339 16212.922 - 16318.201: 81.7457% ( 53) 00:12:21.339 16318.201 - 16423.480: 82.2522% ( 47) 00:12:21.339 16423.480 - 16528.758: 82.8125% ( 52) 00:12:21.339 16528.758 - 16634.037: 83.2220% ( 38) 00:12:21.339 16634.037 - 16739.316: 83.6961% ( 44) 00:12:21.339 16739.316 - 16844.594: 84.2457% ( 51) 00:12:21.339 16844.594 - 
16949.873: 85.0000% ( 70) 00:12:21.339 16949.873 - 17055.152: 86.1315% ( 105) 00:12:21.339 17055.152 - 17160.431: 87.2198% ( 101) 00:12:21.339 17160.431 - 17265.709: 88.1897% ( 90) 00:12:21.339 17265.709 - 17370.988: 88.9763% ( 73) 00:12:21.339 17370.988 - 17476.267: 89.8815% ( 84) 00:12:21.339 17476.267 - 17581.545: 90.7759% ( 83) 00:12:21.339 17581.545 - 17686.824: 91.3362% ( 52) 00:12:21.339 17686.824 - 17792.103: 92.0043% ( 62) 00:12:21.339 17792.103 - 17897.382: 92.5754% ( 53) 00:12:21.339 17897.382 - 18002.660: 93.1250% ( 51) 00:12:21.339 18002.660 - 18107.939: 93.6207% ( 46) 00:12:21.339 18107.939 - 18213.218: 94.1487% ( 49) 00:12:21.339 18213.218 - 18318.496: 94.6228% ( 44) 00:12:21.339 18318.496 - 18423.775: 95.0539% ( 40) 00:12:21.339 18423.775 - 18529.054: 95.4849% ( 40) 00:12:21.339 18529.054 - 18634.333: 95.8621% ( 35) 00:12:21.339 18634.333 - 18739.611: 96.4116% ( 51) 00:12:21.339 18739.611 - 18844.890: 96.9612% ( 51) 00:12:21.339 18844.890 - 18950.169: 97.3384% ( 35) 00:12:21.339 18950.169 - 19055.447: 97.5539% ( 20) 00:12:21.339 19055.447 - 19160.726: 97.7155% ( 15) 00:12:21.339 19160.726 - 19266.005: 97.7802% ( 6) 00:12:21.340 19266.005 - 19371.284: 97.8233% ( 4) 00:12:21.340 19371.284 - 19476.562: 97.8556% ( 3) 00:12:21.340 19476.562 - 19581.841: 97.8772% ( 2) 00:12:21.340 19581.841 - 19687.120: 97.9095% ( 3) 00:12:21.340 19687.120 - 19792.398: 97.9418% ( 3) 00:12:21.340 19792.398 - 19897.677: 98.0065% ( 6) 00:12:21.340 19897.677 - 20002.956: 98.0711% ( 6) 00:12:21.340 20002.956 - 20108.235: 98.1250% ( 5) 00:12:21.340 20108.235 - 20213.513: 98.3728% ( 23) 00:12:21.340 20213.513 - 20318.792: 98.5453% ( 16) 00:12:21.340 20318.792 - 20424.071: 98.6099% ( 6) 00:12:21.340 20424.071 - 20529.349: 98.6207% ( 1) 00:12:21.340 29478.040 - 29688.598: 98.6961% ( 7) 00:12:21.340 29688.598 - 29899.155: 98.7823% ( 8) 00:12:21.340 29899.155 - 30109.712: 98.8685% ( 8) 00:12:21.340 30109.712 - 30320.270: 98.9547% ( 8) 00:12:21.340 30320.270 - 30530.827: 99.0409% ( 8) 00:12:21.340 30530.827 - 30741.385: 99.1379% ( 9) 00:12:21.340 30741.385 - 30951.942: 99.2241% ( 8) 00:12:21.340 30951.942 - 31162.500: 99.3103% ( 8) 00:12:21.340 36636.993 - 36847.550: 99.3642% ( 5) 00:12:21.340 36847.550 - 37058.108: 99.4504% ( 8) 00:12:21.340 37058.108 - 37268.665: 99.5259% ( 7) 00:12:21.340 37268.665 - 37479.222: 99.6121% ( 8) 00:12:21.340 37479.222 - 37689.780: 99.6983% ( 8) 00:12:21.340 37689.780 - 37900.337: 99.7845% ( 8) 00:12:21.340 37900.337 - 38110.895: 99.8707% ( 8) 00:12:21.340 38110.895 - 38321.452: 99.9569% ( 8) 00:12:21.340 38321.452 - 38532.010: 100.0000% ( 4) 00:12:21.340 00:12:21.340 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:21.340 ============================================================================== 00:12:21.340 Range in us Cumulative IO count 00:12:21.340 8369.658 - 8422.297: 0.0535% ( 5) 00:12:21.340 8422.297 - 8474.937: 0.1284% ( 7) 00:12:21.340 8474.937 - 8527.576: 0.2033% ( 7) 00:12:21.340 8527.576 - 8580.215: 0.4174% ( 20) 00:12:21.340 8580.215 - 8632.855: 0.5137% ( 9) 00:12:21.340 8632.855 - 8685.494: 0.5779% ( 6) 00:12:21.340 8685.494 - 8738.133: 0.6421% ( 6) 00:12:21.340 8738.133 - 8790.773: 0.6849% ( 4) 00:12:21.340 8843.412 - 8896.051: 0.7277% ( 4) 00:12:21.340 8896.051 - 8948.691: 0.8241% ( 9) 00:12:21.340 8948.691 - 9001.330: 0.9311% ( 10) 00:12:21.340 9001.330 - 9053.969: 1.1344% ( 19) 00:12:21.340 9053.969 - 9106.609: 1.3164% ( 17) 00:12:21.340 9106.609 - 9159.248: 1.6374% ( 30) 00:12:21.340 9159.248 - 9211.888: 2.0120% ( 35) 00:12:21.340 
9211.888 - 9264.527: 2.5792% ( 53) 00:12:21.340 9264.527 - 9317.166: 3.3176% ( 69) 00:12:21.340 9317.166 - 9369.806: 3.9384% ( 58) 00:12:21.340 9369.806 - 9422.445: 4.9229% ( 92) 00:12:21.340 9422.445 - 9475.084: 5.6293% ( 66) 00:12:21.340 9475.084 - 9527.724: 6.2928% ( 62) 00:12:21.340 9527.724 - 9580.363: 6.8814% ( 55) 00:12:21.340 9580.363 - 9633.002: 7.4914% ( 57) 00:12:21.340 9633.002 - 9685.642: 7.9623% ( 44) 00:12:21.340 9685.642 - 9738.281: 8.5938% ( 59) 00:12:21.340 9738.281 - 9790.920: 9.2145% ( 58) 00:12:21.340 9790.920 - 9843.560: 9.6318% ( 39) 00:12:21.340 9843.560 - 9896.199: 10.1134% ( 45) 00:12:21.340 9896.199 - 9948.839: 10.4773% ( 34) 00:12:21.340 9948.839 - 10001.478: 11.1194% ( 60) 00:12:21.340 10001.478 - 10054.117: 11.4940% ( 35) 00:12:21.340 10054.117 - 10106.757: 11.7937% ( 28) 00:12:21.340 10106.757 - 10159.396: 12.1682% ( 35) 00:12:21.340 10159.396 - 10212.035: 12.6177% ( 42) 00:12:21.340 10212.035 - 10264.675: 13.2170% ( 56) 00:12:21.340 10264.675 - 10317.314: 13.5809% ( 34) 00:12:21.340 10317.314 - 10369.953: 13.8913% ( 29) 00:12:21.340 10369.953 - 10422.593: 14.2979% ( 38) 00:12:21.340 10422.593 - 10475.232: 14.6939% ( 37) 00:12:21.340 10475.232 - 10527.871: 15.1969% ( 47) 00:12:21.340 10527.871 - 10580.511: 15.6357% ( 41) 00:12:21.340 10580.511 - 10633.150: 16.0852% ( 42) 00:12:21.340 10633.150 - 10685.790: 16.4062% ( 30) 00:12:21.340 10685.790 - 10738.429: 16.8343% ( 40) 00:12:21.340 10738.429 - 10791.068: 17.2624% ( 40) 00:12:21.340 10791.068 - 10843.708: 17.8082% ( 51) 00:12:21.340 10843.708 - 10896.347: 18.7072% ( 84) 00:12:21.340 10896.347 - 10948.986: 19.2530% ( 51) 00:12:21.340 10948.986 - 11001.626: 19.7239% ( 44) 00:12:21.340 11001.626 - 11054.265: 20.1948% ( 44) 00:12:21.340 11054.265 - 11106.904: 20.8369% ( 60) 00:12:21.340 11106.904 - 11159.544: 21.3506% ( 48) 00:12:21.340 11159.544 - 11212.183: 21.9178% ( 53) 00:12:21.340 11212.183 - 11264.822: 22.5171% ( 56) 00:12:21.340 11264.822 - 11317.462: 23.0736% ( 52) 00:12:21.340 11317.462 - 11370.101: 23.6836% ( 57) 00:12:21.340 11370.101 - 11422.741: 24.0368% ( 33) 00:12:21.340 11422.741 - 11475.380: 24.4114% ( 35) 00:12:21.340 11475.380 - 11528.019: 24.7324% ( 30) 00:12:21.340 11528.019 - 11580.659: 25.1605% ( 40) 00:12:21.340 11580.659 - 11633.298: 25.4709% ( 29) 00:12:21.340 11633.298 - 11685.937: 25.7277% ( 24) 00:12:21.340 11685.937 - 11738.577: 25.8990% ( 16) 00:12:21.340 11738.577 - 11791.216: 26.1558% ( 24) 00:12:21.340 11791.216 - 11843.855: 26.4020% ( 23) 00:12:21.340 11843.855 - 11896.495: 26.6802% ( 26) 00:12:21.340 11896.495 - 11949.134: 26.9585% ( 26) 00:12:21.340 11949.134 - 12001.773: 27.2367% ( 26) 00:12:21.340 12001.773 - 12054.413: 27.6220% ( 36) 00:12:21.340 12054.413 - 12107.052: 27.8682% ( 23) 00:12:21.340 12107.052 - 12159.692: 28.1036% ( 22) 00:12:21.340 12159.692 - 12212.331: 28.3069% ( 19) 00:12:21.340 12212.331 - 12264.970: 28.5210% ( 20) 00:12:21.340 12264.970 - 12317.610: 28.8313% ( 29) 00:12:21.340 12317.610 - 12370.249: 29.1952% ( 34) 00:12:21.340 12370.249 - 12422.888: 29.5163% ( 30) 00:12:21.340 12422.888 - 12475.528: 29.9979% ( 45) 00:12:21.340 12475.528 - 12528.167: 30.4795% ( 45) 00:12:21.340 12528.167 - 12580.806: 31.0895% ( 57) 00:12:21.340 12580.806 - 12633.446: 31.7316% ( 60) 00:12:21.340 12633.446 - 12686.085: 32.2774% ( 51) 00:12:21.340 12686.085 - 12738.724: 32.9088% ( 59) 00:12:21.340 12738.724 - 12791.364: 33.7115% ( 75) 00:12:21.340 12791.364 - 12844.003: 34.5676% ( 80) 00:12:21.340 12844.003 - 12896.643: 35.6914% ( 105) 00:12:21.340 12896.643 - 12949.282: 
37.2539% ( 146) 00:12:21.340 12949.282 - 13001.921: 38.4418% ( 111) 00:12:21.340 13001.921 - 13054.561: 39.8545% ( 132) 00:12:21.340 13054.561 - 13107.200: 41.4919% ( 153) 00:12:21.340 13107.200 - 13159.839: 42.9259% ( 134) 00:12:21.340 13159.839 - 13212.479: 44.5098% ( 148) 00:12:21.340 13212.479 - 13265.118: 45.7513% ( 116) 00:12:21.340 13265.118 - 13317.757: 47.1533% ( 131) 00:12:21.340 13317.757 - 13370.397: 48.8014% ( 154) 00:12:21.340 13370.397 - 13423.036: 50.7491% ( 182) 00:12:21.340 13423.036 - 13475.676: 52.1618% ( 132) 00:12:21.340 13475.676 - 13580.954: 55.3510% ( 298) 00:12:21.340 13580.954 - 13686.233: 58.2192% ( 268) 00:12:21.340 13686.233 - 13791.512: 60.4024% ( 204) 00:12:21.340 13791.512 - 13896.790: 62.0612% ( 155) 00:12:21.340 13896.790 - 14002.069: 63.6772% ( 151) 00:12:21.340 14002.069 - 14107.348: 64.4264% ( 70) 00:12:21.340 14107.348 - 14212.627: 65.0578% ( 59) 00:12:21.340 14212.627 - 14317.905: 65.6143% ( 52) 00:12:21.340 14317.905 - 14423.184: 66.2350% ( 58) 00:12:21.340 14423.184 - 14528.463: 67.0591% ( 77) 00:12:21.340 14528.463 - 14633.741: 67.9688% ( 85) 00:12:21.340 14633.741 - 14739.020: 69.4028% ( 134) 00:12:21.340 14739.020 - 14844.299: 70.7941% ( 130) 00:12:21.340 14844.299 - 14949.578: 72.3459% ( 145) 00:12:21.340 14949.578 - 15054.856: 73.5552% ( 113) 00:12:21.340 15054.856 - 15160.135: 74.6682% ( 104) 00:12:21.340 15160.135 - 15265.414: 75.3532% ( 64) 00:12:21.340 15265.414 - 15370.692: 76.0381% ( 64) 00:12:21.340 15370.692 - 15475.971: 76.6481% ( 57) 00:12:21.340 15475.971 - 15581.250: 77.2902% ( 60) 00:12:21.340 15581.250 - 15686.529: 77.9431% ( 61) 00:12:21.340 15686.529 - 15791.807: 78.5745% ( 59) 00:12:21.340 15791.807 - 15897.086: 79.1845% ( 57) 00:12:21.340 15897.086 - 16002.365: 79.7731% ( 55) 00:12:21.340 16002.365 - 16107.643: 80.6186% ( 79) 00:12:21.340 16107.643 - 16212.922: 81.1858% ( 53) 00:12:21.340 16212.922 - 16318.201: 81.7851% ( 56) 00:12:21.340 16318.201 - 16423.480: 82.3416% ( 52) 00:12:21.340 16423.480 - 16528.758: 82.9088% ( 53) 00:12:21.340 16528.758 - 16634.037: 83.5188% ( 57) 00:12:21.340 16634.037 - 16739.316: 84.1289% ( 57) 00:12:21.340 16739.316 - 16844.594: 84.4713% ( 32) 00:12:21.340 16844.594 - 16949.873: 84.9208% ( 42) 00:12:21.340 16949.873 - 17055.152: 85.3596% ( 41) 00:12:21.340 17055.152 - 17160.431: 85.8733% ( 48) 00:12:21.340 17160.431 - 17265.709: 86.5796% ( 66) 00:12:21.340 17265.709 - 17370.988: 87.3716% ( 74) 00:12:21.340 17370.988 - 17476.267: 88.2598% ( 83) 00:12:21.340 17476.267 - 17581.545: 89.2551% ( 93) 00:12:21.340 17581.545 - 17686.824: 90.4217% ( 109) 00:12:21.340 17686.824 - 17792.103: 91.4170% ( 93) 00:12:21.340 17792.103 - 17897.382: 92.3266% ( 85) 00:12:21.340 17897.382 - 18002.660: 93.0972% ( 72) 00:12:21.340 18002.660 - 18107.939: 94.0068% ( 85) 00:12:21.340 18107.939 - 18213.218: 94.6704% ( 62) 00:12:21.340 18213.218 - 18318.496: 95.2697% ( 56) 00:12:21.340 18318.496 - 18423.775: 95.7834% ( 48) 00:12:21.340 18423.775 - 18529.054: 96.2115% ( 40) 00:12:21.340 18529.054 - 18634.333: 96.5218% ( 29) 00:12:21.340 18634.333 - 18739.611: 96.9713% ( 42) 00:12:21.340 18739.611 - 18844.890: 97.1747% ( 19) 00:12:21.340 18844.890 - 18950.169: 97.3138% ( 13) 00:12:21.340 18950.169 - 19055.447: 97.6777% ( 34) 00:12:21.340 19055.447 - 19160.726: 97.7633% ( 8) 00:12:21.340 19160.726 - 19266.005: 97.8275% ( 6) 00:12:21.340 19266.005 - 19371.284: 97.9024% ( 7) 00:12:21.340 19371.284 - 19476.562: 97.9345% ( 3) 00:12:21.340 19476.562 - 19581.841: 97.9452% ( 1) 00:12:21.340 19897.677 - 20002.956: 98.0201% ( 7) 
00:12:21.340 20002.956 - 20108.235: 98.1485% ( 12) 00:12:21.340 20108.235 - 20213.513: 98.2021% ( 5) 00:12:21.340 20213.513 - 20318.792: 98.2128% ( 1) 00:12:21.340 20318.792 - 20424.071: 98.2556% ( 4) 00:12:21.341 20424.071 - 20529.349: 98.3091% ( 5) 00:12:21.341 20529.349 - 20634.628: 98.3626% ( 5) 00:12:21.341 20634.628 - 20739.907: 98.4161% ( 5) 00:12:21.341 20739.907 - 20845.186: 98.4696% ( 5) 00:12:21.341 20845.186 - 20950.464: 98.5124% ( 4) 00:12:21.341 20950.464 - 21055.743: 98.5552% ( 4) 00:12:21.341 21055.743 - 21161.022: 98.5980% ( 4) 00:12:21.341 21161.022 - 21266.300: 98.6408% ( 4) 00:12:21.341 21266.300 - 21371.579: 98.6836% ( 4) 00:12:21.341 21371.579 - 21476.858: 98.7265% ( 4) 00:12:21.341 21476.858 - 21582.137: 98.7693% ( 4) 00:12:21.341 21582.137 - 21687.415: 98.8121% ( 4) 00:12:21.341 21687.415 - 21792.694: 98.8549% ( 4) 00:12:21.341 21792.694 - 21897.973: 98.8977% ( 4) 00:12:21.341 21897.973 - 22003.251: 98.9405% ( 4) 00:12:21.341 22003.251 - 22108.530: 98.9833% ( 4) 00:12:21.341 22108.530 - 22213.809: 99.0261% ( 4) 00:12:21.341 22213.809 - 22319.088: 99.0689% ( 4) 00:12:21.341 22319.088 - 22424.366: 99.1224% ( 5) 00:12:21.341 22424.366 - 22529.645: 99.1652% ( 4) 00:12:21.341 22529.645 - 22634.924: 99.2080% ( 4) 00:12:21.341 22634.924 - 22740.202: 99.2616% ( 5) 00:12:21.341 22740.202 - 22845.481: 99.3044% ( 4) 00:12:21.341 22845.481 - 22950.760: 99.3151% ( 1) 00:12:21.341 29056.925 - 29267.483: 99.3579% ( 4) 00:12:21.341 29267.483 - 29478.040: 99.4542% ( 9) 00:12:21.341 29478.040 - 29688.598: 99.5398% ( 8) 00:12:21.341 29688.598 - 29899.155: 99.6147% ( 7) 00:12:21.341 29899.155 - 30109.712: 99.7003% ( 8) 00:12:21.341 30109.712 - 30320.270: 99.7860% ( 8) 00:12:21.341 30320.270 - 30530.827: 99.8716% ( 8) 00:12:21.341 30530.827 - 30741.385: 99.9572% ( 8) 00:12:21.341 30741.385 - 30951.942: 100.0000% ( 4) 00:12:21.341 00:12:21.341 10:17:28 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:21.341 00:12:21.341 real 0m2.691s 00:12:21.341 user 0m2.276s 00:12:21.341 sys 0m0.313s 00:12:21.341 10:17:28 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.341 10:17:28 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:21.341 ************************************ 00:12:21.341 END TEST nvme_perf 00:12:21.341 ************************************ 00:12:21.341 10:17:28 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:21.341 10:17:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.341 10:17:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.341 10:17:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.341 ************************************ 00:12:21.341 START TEST nvme_hello_world 00:12:21.341 ************************************ 00:12:21.341 10:17:28 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:21.620 Initializing NVMe Controllers 00:12:21.620 Attached to 0000:00:10.0 00:12:21.620 Namespace ID: 1 size: 6GB 00:12:21.620 Attached to 0000:00:11.0 00:12:21.620 Namespace ID: 1 size: 5GB 00:12:21.620 Attached to 0000:00:13.0 00:12:21.620 Namespace ID: 1 size: 1GB 00:12:21.620 Attached to 0000:00:12.0 00:12:21.620 Namespace ID: 1 size: 4GB 00:12:21.620 Namespace ID: 2 size: 4GB 00:12:21.620 Namespace ID: 3 size: 4GB 00:12:21.620 Initialization complete. 00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 
00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 00:12:21.620 INFO: using host memory buffer for IO 00:12:21.620 Hello world! 00:12:21.929 00:12:21.929 real 0m0.321s 00:12:21.929 user 0m0.121s 00:12:21.929 sys 0m0.151s 00:12:21.929 ************************************ 00:12:21.929 END TEST nvme_hello_world 00:12:21.929 ************************************ 00:12:21.929 10:17:28 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.929 10:17:28 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:21.929 10:17:28 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:21.929 10:17:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.929 10:17:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.929 10:17:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.929 ************************************ 00:12:21.929 START TEST nvme_sgl 00:12:21.929 ************************************ 00:12:21.929 10:17:28 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:22.189 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:22.189 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:22.189 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:22.189 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:22.189 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:22.189 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:22.189 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:22.189 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:22.189 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:22.190 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:22.190 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:22.190 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:22.190 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_4 Invalid IO length 
parameter 00:12:22.190 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:22.190 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:22.190 NVMe Readv/Writev Request test 00:12:22.190 Attached to 0000:00:10.0 00:12:22.190 Attached to 0000:00:11.0 00:12:22.190 Attached to 0000:00:13.0 00:12:22.190 Attached to 0000:00:12.0 00:12:22.190 0000:00:10.0: build_io_request_2 test passed 00:12:22.190 0000:00:10.0: build_io_request_4 test passed 00:12:22.190 0000:00:10.0: build_io_request_5 test passed 00:12:22.190 0000:00:10.0: build_io_request_6 test passed 00:12:22.190 0000:00:10.0: build_io_request_7 test passed 00:12:22.190 0000:00:10.0: build_io_request_10 test passed 00:12:22.190 0000:00:11.0: build_io_request_2 test passed 00:12:22.190 0000:00:11.0: build_io_request_4 test passed 00:12:22.190 0000:00:11.0: build_io_request_5 test passed 00:12:22.190 0000:00:11.0: build_io_request_6 test passed 00:12:22.190 0000:00:11.0: build_io_request_7 test passed 00:12:22.190 0000:00:11.0: build_io_request_10 test passed 00:12:22.190 Cleaning up... 00:12:22.190 00:12:22.190 real 0m0.366s 00:12:22.190 user 0m0.177s 00:12:22.190 sys 0m0.147s 00:12:22.190 10:17:29 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.190 10:17:29 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:22.190 ************************************ 00:12:22.190 END TEST nvme_sgl 00:12:22.190 ************************************ 00:12:22.190 10:17:29 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:22.190 10:17:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.190 10:17:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.190 10:17:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.190 ************************************ 00:12:22.190 START TEST nvme_e2edp 00:12:22.190 ************************************ 00:12:22.190 10:17:29 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:22.449 NVMe Write/Read with End-to-End data protection test 00:12:22.449 Attached to 0000:00:10.0 00:12:22.449 Attached to 0000:00:11.0 00:12:22.449 Attached to 0000:00:13.0 00:12:22.449 Attached to 0000:00:12.0 00:12:22.449 Cleaning up... 
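The build_io_request_* cases above come from SPDK's sgl test, which submits vectored reads and writes through the SGL callback path and expects the driver to reject requests whose scatter list length does not match the LBA count. As a rough illustration (not the actual test/nvme/sgl/sgl.c source; the struct and helper names here are invented for the sketch, and it assumes an ns and qpair already attached as in the probe output above), a vectored read through that same public API looks like:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec iov[2]; /* two scattered buffers for one read */
        int cur;
    };

    static void reset_sgl(void *arg, uint32_t offset) {
        (void)offset; /* sketch assumes offset 0 */
        ((struct sgl_ctx *)arg)->cur = 0;
    }

    static int next_sge(void *arg, void **address, uint32_t *length) {
        struct sgl_ctx *c = arg;
        *address = c->iov[c->cur].iov_base;
        *length = (uint32_t)c->iov[c->cur].iov_len;
        c->cur++;
        return 0;
    }

    static void read_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        *(bool *)arg = true;
    }

    /* Reads 2 LBAs into two separate DMA-safe buffers. If the summed SGE
     * lengths did not equal 2 * sector_size, submission would fail, which
     * is what the "Invalid IO length parameter" cases above are probing. */
    static int read_two_sges(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair) {
        struct sgl_ctx c = { .cur = 0 };
        bool done = false;
        uint32_t sec = spdk_nvme_ns_get_sector_size(ns);
        c.iov[0].iov_base = spdk_zmalloc(sec, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        c.iov[0].iov_len = sec;
        c.iov[1].iov_base = spdk_zmalloc(sec, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        c.iov[1].iov_len = sec;
        int rc = spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, 2 /* lba_count */,
                                        read_done, &done, 0, reset_sgl, next_sge);
        while (rc == 0 && !done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        spdk_free(c.iov[0].iov_base);
        spdk_free(c.iov[1].iov_base);
        return rc;
    }

The driver never copies the data: reset_sgl/next_sge are called back at submission time to walk the caller's scatter list, which is why a length mismatch can be detected and rejected before anything reaches the device.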
00:12:22.708 00:12:22.708 real 0m0.299s 00:12:22.708 user 0m0.097s 00:12:22.708 sys 0m0.156s 00:12:22.708 10:17:29 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.708 10:17:29 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:22.708 ************************************ 00:12:22.708 END TEST nvme_e2edp 00:12:22.709 ************************************ 00:12:22.709 10:17:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:22.709 10:17:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.709 10:17:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.709 10:17:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.709 ************************************ 00:12:22.709 START TEST nvme_reserve 00:12:22.709 ************************************ 00:12:22.709 10:17:29 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:22.968 ===================================================== 00:12:22.968 NVMe Controller at PCI bus 0, device 16, function 0 00:12:22.968 ===================================================== 00:12:22.968 Reservations: Not Supported 00:12:22.968 ===================================================== 00:12:22.968 NVMe Controller at PCI bus 0, device 17, function 0 00:12:22.968 ===================================================== 00:12:22.968 Reservations: Not Supported 00:12:22.968 ===================================================== 00:12:22.968 NVMe Controller at PCI bus 0, device 19, function 0 00:12:22.968 ===================================================== 00:12:22.968 Reservations: Not Supported 00:12:22.968 ===================================================== 00:12:22.968 NVMe Controller at PCI bus 0, device 18, function 0 00:12:22.968 ===================================================== 00:12:22.968 Reservations: Not Supported 00:12:22.968 Reservation test passed 00:12:22.968 00:12:22.968 real 0m0.293s 00:12:22.968 user 0m0.092s 00:12:22.968 sys 0m0.160s 00:12:22.968 ************************************ 00:12:22.968 END TEST nvme_reserve 00:12:22.968 ************************************ 00:12:22.968 10:17:29 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.968 10:17:29 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:22.968 10:17:29 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:22.968 10:17:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:22.968 10:17:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.968 10:17:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.968 ************************************ 00:12:22.968 START TEST nvme_err_injection 00:12:22.968 ************************************ 00:12:22.968 10:17:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:23.228 NVMe Error Injection test 00:12:23.228 Attached to 0000:00:10.0 00:12:23.228 Attached to 0000:00:11.0 00:12:23.228 Attached to 0000:00:13.0 00:12:23.228 Attached to 0000:00:12.0 00:12:23.228 0000:00:13.0: get features failed as expected 00:12:23.228 0000:00:12.0: get features failed as expected 00:12:23.228 0000:00:10.0: get features failed as expected 00:12:23.228 0000:00:11.0: get features failed as expected 00:12:23.228 
0000:00:10.0: get features successfully as expected 00:12:23.228 0000:00:11.0: get features successfully as expected 00:12:23.228 0000:00:13.0: get features successfully as expected 00:12:23.228 0000:00:12.0: get features successfully as expected 00:12:23.228 0000:00:10.0: read failed as expected 00:12:23.228 0000:00:11.0: read failed as expected 00:12:23.228 0000:00:13.0: read failed as expected 00:12:23.228 0000:00:12.0: read failed as expected 00:12:23.228 0000:00:10.0: read successfully as expected 00:12:23.228 0000:00:11.0: read successfully as expected 00:12:23.228 0000:00:13.0: read successfully as expected 00:12:23.228 0000:00:12.0: read successfully as expected 00:12:23.228 Cleaning up... 00:12:23.228 ************************************ 00:12:23.228 END TEST nvme_err_injection 00:12:23.228 ************************************ 00:12:23.228 00:12:23.228 real 0m0.307s 00:12:23.228 user 0m0.112s 00:12:23.228 sys 0m0.146s 00:12:23.228 10:17:30 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.228 10:17:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 10:17:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:23.487 10:17:30 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:23.487 10:17:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.487 10:17:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 ************************************ 00:12:23.487 START TEST nvme_overhead 00:12:23.487 ************************************ 00:12:23.487 10:17:30 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:24.866 Initializing NVMe Controllers 00:12:24.866 Attached to 0000:00:10.0 00:12:24.866 Attached to 0000:00:11.0 00:12:24.866 Attached to 0000:00:13.0 00:12:24.866 Attached to 0000:00:12.0 00:12:24.866 Initialization complete. Launching workers. 
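The overhead tool whose startup is logged here times the submission and completion paths separately and prints the per-bucket histograms that follow. A minimal way to take the same two measurements with the public API (a sketch with invented names like time_one_read, assuming an attached ns, qpair, and a one-sector DMA buffer; the real tool's definition of "complete" time is its own, so treat this as end-to-end latency rather than a reimplementation):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static uint64_t g_t_submitted;
    static uint64_t g_complete_ns;

    static void read_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        /* Completion-side latency: ticks from submission to this callback. */
        g_complete_ns = (spdk_get_ticks() - g_t_submitted) * 1000000000ULL /
                        spdk_get_ticks_hz();
        *(bool *)arg = true;
    }

    static void time_one_read(struct spdk_nvme_ns *ns,
                              struct spdk_nvme_qpair *qpair, void *buf) {
        bool done = false;
        uint64_t t0 = spdk_get_ticks();
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, read_done, &done, 0);
        /* Submit-side latency: time spent queueing the command. */
        uint64_t submit_ns = (spdk_get_ticks() - t0) * 1000000000ULL /
                             spdk_get_ticks_hz();
        g_t_submitted = t0;
        while (rc == 0 && !done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        printf("submit: %" PRIu64 " ns, complete: %" PRIu64 " ns\n",
               submit_ns, g_complete_ns);
    }

Because the driver is polled, read_done can only fire inside spdk_nvme_qpair_process_completions(), so the two timestamps cleanly separate queueing cost from round-trip time; the microsecond-scale submit/complete averages reported below come from exactly this kind of tick arithmetic.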
00:12:24.866 submit (in ns) avg, min, max = 13669.4, 11765.5, 125604.0 00:12:24.866 complete (in ns) avg, min, max = 8554.0, 7812.0, 68966.3 00:12:24.866 00:12:24.866 Submit histogram 00:12:24.866 ================ 00:12:24.866 Range in us Cumulative Count 00:12:24.866 11.720 - 11.772: 0.0150% ( 1) 00:12:24.866 12.132 - 12.183: 0.0300% ( 1) 00:12:24.866 12.235 - 12.286: 0.0450% ( 1) 00:12:24.866 12.286 - 12.337: 0.1351% ( 6) 00:12:24.866 12.337 - 12.389: 0.2101% ( 5) 00:12:24.866 12.389 - 12.440: 0.2251% ( 1) 00:12:24.866 12.440 - 12.492: 0.3301% ( 7) 00:12:24.866 12.492 - 12.543: 0.5402% ( 14) 00:12:24.866 12.543 - 12.594: 0.8253% ( 19) 00:12:24.866 12.594 - 12.646: 1.3055% ( 32) 00:12:24.866 12.646 - 12.697: 2.1759% ( 58) 00:12:24.866 12.697 - 12.749: 3.2413% ( 71) 00:12:24.866 12.749 - 12.800: 4.3968% ( 77) 00:12:24.866 12.800 - 12.851: 6.1375% ( 116) 00:12:24.866 12.851 - 12.903: 8.8235% ( 179) 00:12:24.866 12.903 - 12.954: 11.8848% ( 204) 00:12:24.866 12.954 - 13.006: 15.9364% ( 270) 00:12:24.866 13.006 - 13.057: 20.7683% ( 322) 00:12:24.866 13.057 - 13.108: 25.7953% ( 335) 00:12:24.866 13.108 - 13.160: 31.7677% ( 398) 00:12:24.866 13.160 - 13.263: 43.8025% ( 802) 00:12:24.866 13.263 - 13.365: 54.2767% ( 698) 00:12:24.866 13.365 - 13.468: 65.7563% ( 765) 00:12:24.866 13.468 - 13.571: 74.6248% ( 591) 00:12:24.866 13.571 - 13.674: 81.3475% ( 448) 00:12:24.866 13.674 - 13.777: 86.6747% ( 355) 00:12:24.866 13.777 - 13.880: 90.3962% ( 248) 00:12:24.866 13.880 - 13.982: 91.9568% ( 104) 00:12:24.866 13.982 - 14.085: 93.0672% ( 74) 00:12:24.866 14.085 - 14.188: 93.8175% ( 50) 00:12:24.866 14.188 - 14.291: 94.1176% ( 20) 00:12:24.866 14.291 - 14.394: 94.2527% ( 9) 00:12:24.866 14.394 - 14.496: 94.3427% ( 6) 00:12:24.866 14.496 - 14.599: 94.4028% ( 4) 00:12:24.866 14.599 - 14.702: 94.5078% ( 7) 00:12:24.866 14.702 - 14.805: 94.5378% ( 2) 00:12:24.866 14.805 - 14.908: 94.5828% ( 3) 00:12:24.866 14.908 - 15.010: 94.5978% ( 1) 00:12:24.866 15.113 - 15.216: 94.6128% ( 1) 00:12:24.866 15.216 - 15.319: 94.6429% ( 2) 00:12:24.866 15.422 - 15.524: 94.6579% ( 1) 00:12:24.866 15.524 - 15.627: 94.6729% ( 1) 00:12:24.866 15.627 - 15.730: 94.6879% ( 1) 00:12:24.866 15.730 - 15.833: 94.7029% ( 1) 00:12:24.866 15.833 - 15.936: 94.7179% ( 1) 00:12:24.866 15.936 - 16.039: 94.7329% ( 1) 00:12:24.866 16.244 - 16.347: 94.7479% ( 1) 00:12:24.866 16.347 - 16.450: 94.8079% ( 4) 00:12:24.866 16.553 - 16.655: 94.8379% ( 2) 00:12:24.866 16.655 - 16.758: 94.8529% ( 1) 00:12:24.866 16.964 - 17.067: 94.9130% ( 4) 00:12:24.866 17.067 - 17.169: 94.9280% ( 1) 00:12:24.866 17.169 - 17.272: 95.1230% ( 13) 00:12:24.866 17.272 - 17.375: 95.2431% ( 8) 00:12:24.866 17.375 - 17.478: 95.4082% ( 11) 00:12:24.866 17.478 - 17.581: 95.5882% ( 12) 00:12:24.866 17.581 - 17.684: 95.8433% ( 17) 00:12:24.866 17.684 - 17.786: 96.0534% ( 14) 00:12:24.866 17.786 - 17.889: 96.2485% ( 13) 00:12:24.866 17.889 - 17.992: 96.4286% ( 12) 00:12:24.866 17.992 - 18.095: 96.5486% ( 8) 00:12:24.866 18.095 - 18.198: 96.7737% ( 15) 00:12:24.866 18.198 - 18.300: 96.9388% ( 11) 00:12:24.866 18.300 - 18.403: 97.1339% ( 13) 00:12:24.866 18.403 - 18.506: 97.3139% ( 12) 00:12:24.866 18.506 - 18.609: 97.4640% ( 10) 00:12:24.866 18.609 - 18.712: 97.5840% ( 8) 00:12:24.866 18.712 - 18.814: 97.6441% ( 4) 00:12:24.866 18.814 - 18.917: 97.7341% ( 6) 00:12:24.866 18.917 - 19.020: 97.8691% ( 9) 00:12:24.866 19.020 - 19.123: 98.0492% ( 12) 00:12:24.866 19.123 - 19.226: 98.2293% ( 12) 00:12:24.866 19.226 - 19.329: 98.2743% ( 3) 00:12:24.866 19.329 - 19.431: 98.3643% ( 6) 
00:12:24.866 19.431 - 19.534: 98.4694% ( 7) 00:12:24.866 19.534 - 19.637: 98.4844% ( 1) 00:12:24.866 19.637 - 19.740: 98.5294% ( 3) 00:12:24.866 19.740 - 19.843: 98.5894% ( 4) 00:12:24.866 19.843 - 19.945: 98.6345% ( 3) 00:12:24.866 19.945 - 20.048: 98.6945% ( 4) 00:12:24.866 20.048 - 20.151: 98.7545% ( 4) 00:12:24.866 20.151 - 20.254: 98.7695% ( 1) 00:12:24.866 20.254 - 20.357: 98.8145% ( 3) 00:12:24.866 20.357 - 20.459: 98.8595% ( 3) 00:12:24.866 20.459 - 20.562: 98.8896% ( 2) 00:12:24.866 20.562 - 20.665: 98.9346% ( 3) 00:12:24.866 20.665 - 20.768: 98.9496% ( 1) 00:12:24.866 20.768 - 20.871: 98.9646% ( 1) 00:12:24.866 20.871 - 20.973: 99.0096% ( 3) 00:12:24.866 20.973 - 21.076: 99.0246% ( 1) 00:12:24.866 21.076 - 21.179: 99.0396% ( 1) 00:12:24.866 21.179 - 21.282: 99.0696% ( 2) 00:12:24.866 21.282 - 21.385: 99.1146% ( 3) 00:12:24.866 21.385 - 21.488: 99.1747% ( 4) 00:12:24.866 21.488 - 21.590: 99.2347% ( 4) 00:12:24.866 21.590 - 21.693: 99.2647% ( 2) 00:12:24.866 21.693 - 21.796: 99.2797% ( 1) 00:12:24.866 21.796 - 21.899: 99.2947% ( 1) 00:12:24.866 21.899 - 22.002: 99.3097% ( 1) 00:12:24.866 22.002 - 22.104: 99.3247% ( 1) 00:12:24.866 22.207 - 22.310: 99.3998% ( 5) 00:12:24.866 22.310 - 22.413: 99.4148% ( 1) 00:12:24.866 22.413 - 22.516: 99.4598% ( 3) 00:12:24.866 22.516 - 22.618: 99.4748% ( 1) 00:12:24.866 22.618 - 22.721: 99.4898% ( 1) 00:12:24.866 22.927 - 23.030: 99.5198% ( 2) 00:12:24.866 23.133 - 23.235: 99.5348% ( 1) 00:12:24.866 23.235 - 23.338: 99.5798% ( 3) 00:12:24.866 23.647 - 23.749: 99.5948% ( 1) 00:12:24.866 23.749 - 23.852: 99.6248% ( 2) 00:12:24.866 23.955 - 24.058: 99.6399% ( 1) 00:12:24.866 24.058 - 24.161: 99.6549% ( 1) 00:12:24.866 24.161 - 24.263: 99.6849% ( 2) 00:12:24.866 25.189 - 25.292: 99.6999% ( 1) 00:12:24.866 25.497 - 25.600: 99.7149% ( 1) 00:12:24.866 25.703 - 25.806: 99.7449% ( 2) 00:12:24.866 25.908 - 26.011: 99.7599% ( 1) 00:12:24.866 26.114 - 26.217: 99.7749% ( 1) 00:12:24.866 26.217 - 26.320: 99.7899% ( 1) 00:12:24.866 26.731 - 26.937: 99.8049% ( 1) 00:12:24.866 27.142 - 27.348: 99.8199% ( 1) 00:12:24.866 27.553 - 27.759: 99.8349% ( 1) 00:12:24.866 29.815 - 30.021: 99.8499% ( 1) 00:12:24.866 30.227 - 30.432: 99.8649% ( 1) 00:12:24.867 31.460 - 31.666: 99.8800% ( 1) 00:12:24.867 31.666 - 31.871: 99.8950% ( 1) 00:12:24.867 32.077 - 32.283: 99.9100% ( 1) 00:12:24.867 33.311 - 33.516: 99.9250% ( 1) 00:12:24.867 33.722 - 33.928: 99.9400% ( 1) 00:12:24.867 34.545 - 34.750: 99.9550% ( 1) 00:12:24.867 42.769 - 42.975: 99.9700% ( 1) 00:12:24.867 81.015 - 81.427: 99.9850% ( 1) 00:12:24.867 125.018 - 125.841: 100.0000% ( 1) 00:12:24.867 00:12:24.867 Complete histogram 00:12:24.867 ================== 00:12:24.867 Range in us Cumulative Count 00:12:24.867 7.762 - 7.814: 0.0150% ( 1) 00:12:24.867 7.814 - 7.865: 0.4202% ( 27) 00:12:24.867 7.865 - 7.916: 3.7215% ( 220) 00:12:24.867 7.916 - 7.968: 13.8055% ( 672) 00:12:24.867 7.968 - 8.019: 29.9220% ( 1074) 00:12:24.867 8.019 - 8.071: 45.7383% ( 1054) 00:12:24.867 8.071 - 8.122: 56.3475% ( 707) 00:12:24.867 8.122 - 8.173: 63.4004% ( 470) 00:12:24.867 8.173 - 8.225: 68.4574% ( 337) 00:12:24.867 8.225 - 8.276: 71.6537% ( 213) 00:12:24.867 8.276 - 8.328: 73.7545% ( 140) 00:12:24.867 8.328 - 8.379: 75.1200% ( 91) 00:12:24.867 8.379 - 8.431: 75.8553% ( 49) 00:12:24.867 8.431 - 8.482: 76.5006% ( 43) 00:12:24.867 8.482 - 8.533: 76.8457% ( 23) 00:12:24.867 8.533 - 8.585: 77.3709% ( 35) 00:12:24.867 8.585 - 8.636: 77.9862% ( 41) 00:12:24.867 8.636 - 8.688: 78.9166% ( 62) 00:12:24.867 8.688 - 8.739: 79.8920% ( 65) 00:12:24.867 
8.739 - 8.790: 80.7323% ( 56) 00:12:24.867 8.790 - 8.842: 82.0378% ( 87) 00:12:24.867 8.842 - 8.893: 83.5534% ( 101) 00:12:24.867 8.893 - 8.945: 84.9640% ( 94) 00:12:24.867 8.945 - 8.996: 86.2545% ( 86) 00:12:24.867 8.996 - 9.047: 87.4850% ( 82) 00:12:24.867 9.047 - 9.099: 88.8355% ( 90) 00:12:24.867 9.099 - 9.150: 89.7209% ( 59) 00:12:24.867 9.150 - 9.202: 90.7713% ( 70) 00:12:24.867 9.202 - 9.253: 91.4916% ( 48) 00:12:24.867 9.253 - 9.304: 92.2569% ( 51) 00:12:24.867 9.304 - 9.356: 92.8271% ( 38) 00:12:24.867 9.356 - 9.407: 93.5324% ( 47) 00:12:24.867 9.407 - 9.459: 94.0876% ( 37) 00:12:24.867 9.459 - 9.510: 94.5228% ( 29) 00:12:24.867 9.510 - 9.561: 94.9880% ( 31) 00:12:24.867 9.561 - 9.613: 95.3631% ( 25) 00:12:24.867 9.613 - 9.664: 95.5432% ( 12) 00:12:24.867 9.664 - 9.716: 95.8583% ( 21) 00:12:24.867 9.716 - 9.767: 96.1585% ( 20) 00:12:24.867 9.767 - 9.818: 96.2785% ( 8) 00:12:24.867 9.818 - 9.870: 96.4286% ( 10) 00:12:24.867 9.870 - 9.921: 96.5036% ( 5) 00:12:24.867 9.921 - 9.973: 96.5336% ( 2) 00:12:24.867 9.973 - 10.024: 96.5486% ( 1) 00:12:24.867 10.024 - 10.076: 96.6236% ( 5) 00:12:24.867 10.076 - 10.127: 96.6687% ( 3) 00:12:24.867 10.178 - 10.230: 96.7137% ( 3) 00:12:24.867 10.281 - 10.333: 96.7287% ( 1) 00:12:24.867 10.487 - 10.538: 96.7587% ( 2) 00:12:24.867 10.538 - 10.590: 96.7887% ( 2) 00:12:24.867 10.692 - 10.744: 96.8037% ( 1) 00:12:24.867 10.744 - 10.795: 96.8487% ( 3) 00:12:24.867 10.795 - 10.847: 96.8788% ( 2) 00:12:24.867 10.898 - 10.949: 96.8938% ( 1) 00:12:24.867 10.949 - 11.001: 96.9088% ( 1) 00:12:24.867 11.001 - 11.052: 96.9238% ( 1) 00:12:24.867 11.104 - 11.155: 96.9388% ( 1) 00:12:24.867 11.155 - 11.206: 96.9538% ( 1) 00:12:24.867 11.258 - 11.309: 96.9688% ( 1) 00:12:24.867 11.361 - 11.412: 96.9838% ( 1) 00:12:24.867 11.515 - 11.566: 96.9988% ( 1) 00:12:24.867 11.566 - 11.618: 97.0138% ( 1) 00:12:24.867 11.669 - 11.720: 97.0438% ( 2) 00:12:24.867 11.875 - 11.926: 97.0738% ( 2) 00:12:24.867 12.029 - 12.080: 97.1038% ( 2) 00:12:24.867 12.183 - 12.235: 97.1188% ( 1) 00:12:24.867 12.286 - 12.337: 97.1339% ( 1) 00:12:24.867 12.337 - 12.389: 97.1639% ( 2) 00:12:24.867 12.492 - 12.543: 97.1789% ( 1) 00:12:24.867 12.697 - 12.749: 97.1939% ( 1) 00:12:24.867 12.800 - 12.851: 97.2089% ( 1) 00:12:24.867 13.006 - 13.057: 97.2239% ( 1) 00:12:24.867 13.057 - 13.108: 97.2389% ( 1) 00:12:24.867 13.108 - 13.160: 97.2839% ( 3) 00:12:24.867 13.160 - 13.263: 97.3890% ( 7) 00:12:24.867 13.263 - 13.365: 97.5240% ( 9) 00:12:24.867 13.365 - 13.468: 97.6140% ( 6) 00:12:24.867 13.468 - 13.571: 97.8241% ( 14) 00:12:24.867 13.571 - 13.674: 97.9592% ( 9) 00:12:24.867 13.674 - 13.777: 98.0342% ( 5) 00:12:24.867 13.777 - 13.880: 98.1843% ( 10) 00:12:24.867 13.880 - 13.982: 98.2293% ( 3) 00:12:24.867 13.982 - 14.085: 98.3043% ( 5) 00:12:24.867 14.085 - 14.188: 98.3643% ( 4) 00:12:24.867 14.188 - 14.291: 98.4094% ( 3) 00:12:24.867 14.291 - 14.394: 98.4244% ( 1) 00:12:24.867 14.394 - 14.496: 98.4544% ( 2) 00:12:24.867 14.599 - 14.702: 98.4994% ( 3) 00:12:24.867 14.702 - 14.805: 98.5444% ( 3) 00:12:24.867 14.805 - 14.908: 98.5594% ( 1) 00:12:24.867 15.010 - 15.113: 98.5894% ( 2) 00:12:24.867 15.319 - 15.422: 98.6345% ( 3) 00:12:24.867 15.422 - 15.524: 98.6495% ( 1) 00:12:24.867 15.524 - 15.627: 98.6645% ( 1) 00:12:24.867 15.627 - 15.730: 98.6945% ( 2) 00:12:24.867 15.730 - 15.833: 98.7245% ( 2) 00:12:24.867 15.833 - 15.936: 98.7395% ( 1) 00:12:24.867 15.936 - 16.039: 98.7545% ( 1) 00:12:24.867 16.758 - 16.861: 98.7695% ( 1) 00:12:24.867 16.861 - 16.964: 98.7845% ( 1) 00:12:24.867 17.067 - 
17.169: 98.7995% ( 1) 00:12:24.867 17.169 - 17.272: 98.8145% ( 1) 00:12:24.867 17.375 - 17.478: 98.8445% ( 2) 00:12:24.867 17.478 - 17.581: 98.8745% ( 2) 00:12:24.867 17.581 - 17.684: 98.8896% ( 1) 00:12:24.867 17.786 - 17.889: 98.9046% ( 1) 00:12:24.867 17.889 - 17.992: 98.9346% ( 2) 00:12:24.867 17.992 - 18.095: 98.9646% ( 2) 00:12:24.867 18.095 - 18.198: 98.9796% ( 1) 00:12:24.867 18.198 - 18.300: 98.9946% ( 1) 00:12:24.867 18.403 - 18.506: 99.0096% ( 1) 00:12:24.867 18.506 - 18.609: 99.0546% ( 3) 00:12:24.867 18.712 - 18.814: 99.0846% ( 2) 00:12:24.867 18.814 - 18.917: 99.0996% ( 1) 00:12:24.867 18.917 - 19.020: 99.1297% ( 2) 00:12:24.867 19.020 - 19.123: 99.1447% ( 1) 00:12:24.867 19.123 - 19.226: 99.1897% ( 3) 00:12:24.867 19.226 - 19.329: 99.2197% ( 2) 00:12:24.867 19.329 - 19.431: 99.2647% ( 3) 00:12:24.867 19.431 - 19.534: 99.2947% ( 2) 00:12:24.867 19.534 - 19.637: 99.3247% ( 2) 00:12:24.867 19.637 - 19.740: 99.3697% ( 3) 00:12:24.867 19.740 - 19.843: 99.3998% ( 2) 00:12:24.867 19.843 - 19.945: 99.4448% ( 3) 00:12:24.867 19.945 - 20.048: 99.4748% ( 2) 00:12:24.867 20.048 - 20.151: 99.5048% ( 2) 00:12:24.867 20.151 - 20.254: 99.5348% ( 2) 00:12:24.867 20.357 - 20.459: 99.5498% ( 1) 00:12:24.867 20.459 - 20.562: 99.5798% ( 2) 00:12:24.867 20.562 - 20.665: 99.5948% ( 1) 00:12:24.867 20.665 - 20.768: 99.6098% ( 1) 00:12:24.867 20.973 - 21.076: 99.6248% ( 1) 00:12:24.867 21.076 - 21.179: 99.6699% ( 3) 00:12:24.867 21.282 - 21.385: 99.6849% ( 1) 00:12:24.867 21.385 - 21.488: 99.7149% ( 2) 00:12:24.867 21.590 - 21.693: 99.7449% ( 2) 00:12:24.867 22.104 - 22.207: 99.7749% ( 2) 00:12:24.867 22.413 - 22.516: 99.7899% ( 1) 00:12:24.867 22.516 - 22.618: 99.8049% ( 1) 00:12:24.867 22.618 - 22.721: 99.8349% ( 2) 00:12:24.867 23.235 - 23.338: 99.8499% ( 1) 00:12:24.867 23.441 - 23.544: 99.8649% ( 1) 00:12:24.867 23.852 - 23.955: 99.8800% ( 1) 00:12:24.867 31.255 - 31.460: 99.8950% ( 1) 00:12:24.867 37.835 - 38.040: 99.9100% ( 1) 00:12:24.867 38.451 - 38.657: 99.9250% ( 1) 00:12:24.867 44.620 - 44.826: 99.9400% ( 1) 00:12:24.867 48.116 - 48.321: 99.9550% ( 1) 00:12:24.867 50.172 - 50.378: 99.9700% ( 1) 00:12:24.867 57.986 - 58.397: 99.9850% ( 1) 00:12:24.867 68.678 - 69.089: 100.0000% ( 1) 00:12:24.867 00:12:24.867 00:12:24.867 real 0m1.313s 00:12:24.867 user 0m1.111s 00:12:24.867 sys 0m0.152s 00:12:24.867 10:17:31 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.867 10:17:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:24.867 ************************************ 00:12:24.867 END TEST nvme_overhead 00:12:24.867 ************************************ 00:12:24.867 10:17:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:24.867 10:17:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:24.867 10:17:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.867 10:17:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.867 ************************************ 00:12:24.867 START TEST nvme_arbitration 00:12:24.867 ************************************ 00:12:24.867 10:17:31 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:28.169 Initializing NVMe Controllers 00:12:28.169 Attached to 0000:00:10.0 00:12:28.169 Attached to 0000:00:11.0 00:12:28.169 Attached to 0000:00:13.0 00:12:28.169 Attached to 0000:00:12.0 00:12:28.169 Associating QEMU NVMe Ctrl (12340 
) with lcore 0 00:12:28.169 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:28.169 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:28.169 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:28.169 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:28.169 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:28.169 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:28.169 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:28.169 Initialization complete. Launching workers. 00:12:28.169 Starting thread on core 1 with urgent priority queue 00:12:28.169 Starting thread on core 2 with urgent priority queue 00:12:28.169 Starting thread on core 3 with urgent priority queue 00:12:28.169 Starting thread on core 0 with urgent priority queue 00:12:28.169 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:12:28.169 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:12:28.169 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:12:28.169 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:12:28.169 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:12:28.169 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:12:28.169 ======================================================== 00:12:28.169 00:12:28.169 00:12:28.169 real 0m3.454s 00:12:28.169 user 0m9.465s 00:12:28.169 sys 0m0.160s 00:12:28.169 10:17:35 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.169 10:17:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:28.169 ************************************ 00:12:28.169 END TEST nvme_arbitration 00:12:28.169 ************************************ 00:12:28.169 10:17:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:28.169 10:17:35 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.169 10:17:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.169 10:17:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.430 ************************************ 00:12:28.430 START TEST nvme_single_aen 00:12:28.430 ************************************ 00:12:28.430 10:17:35 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:28.689 Asynchronous Event Request test 00:12:28.689 Attached to 0000:00:10.0 00:12:28.689 Attached to 0000:00:11.0 00:12:28.689 Attached to 0000:00:13.0 00:12:28.689 Attached to 0000:00:12.0 00:12:28.689 Reset controller to setup AER completions for this process 00:12:28.689 Registering asynchronous event callbacks... 
00:12:28.689 Getting orig temperature thresholds of all controllers 00:12:28.690 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.690 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.690 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.690 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:28.690 Setting all controllers temperature threshold low to trigger AER 00:12:28.690 Waiting for all controllers temperature threshold to be set lower 00:12:28.690 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.690 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:28.690 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.690 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:28.690 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.690 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:28.690 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:28.690 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:28.690 Waiting for all controllers to trigger AER and reset threshold 00:12:28.690 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.690 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.690 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.690 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:28.690 Cleaning up... 00:12:28.690 ************************************ 00:12:28.690 END TEST nvme_single_aen 00:12:28.690 ************************************ 00:12:28.690 00:12:28.690 real 0m0.328s 00:12:28.690 user 0m0.118s 00:12:28.690 sys 0m0.164s 00:12:28.690 10:17:35 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.690 10:17:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:28.690 10:17:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:28.690 10:17:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.690 10:17:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.690 10:17:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.690 ************************************ 00:12:28.690 START TEST nvme_doorbell_aers 00:12:28.690 ************************************ 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:28.690 10:17:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:29.256 [2024-11-25 10:17:36.107597] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:12:39.249 Executing: test_write_invalid_db 00:12:39.249 Waiting for AER completion... 00:12:39.249 Failure: test_write_invalid_db 00:12:39.249 00:12:39.249 Executing: test_invalid_db_write_overflow_sq 00:12:39.250 Waiting for AER completion... 00:12:39.250 Failure: test_invalid_db_write_overflow_sq 00:12:39.250 00:12:39.250 Executing: test_invalid_db_write_overflow_cq 00:12:39.250 Waiting for AER completion... 00:12:39.250 Failure: test_invalid_db_write_overflow_cq 00:12:39.250 00:12:39.250 10:17:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:39.250 10:17:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:39.250 [2024-11-25 10:17:46.160206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:12:49.223 Executing: test_write_invalid_db 00:12:49.223 Waiting for AER completion... 00:12:49.223 Failure: test_write_invalid_db 00:12:49.223 00:12:49.223 Executing: test_invalid_db_write_overflow_sq 00:12:49.223 Waiting for AER completion... 00:12:49.223 Failure: test_invalid_db_write_overflow_sq 00:12:49.223 00:12:49.223 Executing: test_invalid_db_write_overflow_cq 00:12:49.223 Waiting for AER completion... 00:12:49.223 Failure: test_invalid_db_write_overflow_cq 00:12:49.223 00:12:49.223 10:17:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:49.223 10:17:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:49.223 [2024-11-25 10:17:56.196077] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:12:59.204 Executing: test_write_invalid_db 00:12:59.204 Waiting for AER completion... 00:12:59.204 Failure: test_write_invalid_db 00:12:59.204 00:12:59.204 Executing: test_invalid_db_write_overflow_sq 00:12:59.204 Waiting for AER completion... 00:12:59.204 Failure: test_invalid_db_write_overflow_sq 00:12:59.204 00:12:59.204 Executing: test_invalid_db_write_overflow_cq 00:12:59.204 Waiting for AER completion... 
00:12:59.204 Failure: test_invalid_db_write_overflow_cq 00:12:59.204 00:12:59.204 10:18:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:59.204 10:18:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:59.204 [2024-11-25 10:18:06.274355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.182 Executing: test_write_invalid_db 00:13:09.182 Waiting for AER completion... 00:13:09.182 Failure: test_write_invalid_db 00:13:09.182 00:13:09.182 Executing: test_invalid_db_write_overflow_sq 00:13:09.182 Waiting for AER completion... 00:13:09.182 Failure: test_invalid_db_write_overflow_sq 00:13:09.182 00:13:09.182 Executing: test_invalid_db_write_overflow_cq 00:13:09.182 Waiting for AER completion... 00:13:09.182 Failure: test_invalid_db_write_overflow_cq 00:13:09.182 00:13:09.182 ************************************ 00:13:09.182 END TEST nvme_doorbell_aers 00:13:09.182 ************************************ 00:13:09.182 00:13:09.182 real 0m40.331s 00:13:09.183 user 0m28.518s 00:13:09.183 sys 0m11.436s 00:13:09.183 10:18:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.183 10:18:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:09.183 10:18:16 nvme -- nvme/nvme.sh@97 -- # uname 00:13:09.183 10:18:16 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:09.183 10:18:16 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:09.183 10:18:16 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:09.183 10:18:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.183 10:18:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.183 ************************************ 00:13:09.183 START TEST nvme_multi_aen 00:13:09.183 ************************************ 00:13:09.183 10:18:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:09.441 [2024-11-25 10:18:16.371043] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.371298] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.371319] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.372814] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.372846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.372859] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.374279] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. 
Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.374430] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.374452] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.375863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.375902] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 [2024-11-25 10:18:16.375916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64469) is not found. Dropping the request. 00:13:09.441 Child process pid: 64985 00:13:09.700 [Child] Asynchronous Event Request test 00:13:09.700 [Child] Attached to 0000:00:10.0 00:13:09.700 [Child] Attached to 0000:00:11.0 00:13:09.700 [Child] Attached to 0000:00:13.0 00:13:09.700 [Child] Attached to 0000:00:12.0 00:13:09.700 [Child] Registering asynchronous event callbacks... 00:13:09.700 [Child] Getting orig temperature thresholds of all controllers 00:13:09.701 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:09.701 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 [Child] Cleaning up... 00:13:09.701 Asynchronous Event Request test 00:13:09.701 Attached to 0000:00:10.0 00:13:09.701 Attached to 0000:00:11.0 00:13:09.701 Attached to 0000:00:13.0 00:13:09.701 Attached to 0000:00:12.0 00:13:09.701 Reset controller to setup AER completions for this process 00:13:09.701 Registering asynchronous event callbacks... 
00:13:09.701 Getting orig temperature thresholds of all controllers 00:13:09.701 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:09.701 Setting all controllers temperature threshold low to trigger AER 00:13:09.701 Waiting for all controllers temperature threshold to be set lower 00:13:09.701 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:09.701 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:09.701 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:09.701 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:09.701 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:09.701 Waiting for all controllers to trigger AER and reset threshold 00:13:09.701 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:09.701 Cleaning up... 00:13:09.701 ************************************ 00:13:09.701 END TEST nvme_multi_aen 00:13:09.701 ************************************ 00:13:09.701 00:13:09.701 real 0m0.641s 00:13:09.701 user 0m0.212s 00:13:09.701 sys 0m0.311s 00:13:09.701 10:18:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.701 10:18:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:09.701 10:18:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:09.701 10:18:16 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.701 10:18:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.701 10:18:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.701 ************************************ 00:13:09.701 START TEST nvme_startup 00:13:09.701 ************************************ 00:13:09.701 10:18:16 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:09.961 Initializing NVMe Controllers 00:13:09.961 Attached to 0000:00:10.0 00:13:09.961 Attached to 0000:00:11.0 00:13:09.961 Attached to 0000:00:13.0 00:13:09.961 Attached to 0000:00:12.0 00:13:09.961 Initialization complete. 00:13:09.961 Time used:178309.422 (us). 
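The nvme_startup stage above is a pure bring-up timing check: it attaches all four controllers, reports the time spent in initialization (about 178 ms here, per the 'Time used' line; the real/user/sys summary follows below), and fails only if that exceeds the budget passed via -t, in microseconds. The invocation as traced:

    # -t 1000000 allows up to 1,000,000 us (1 s) of controller initialization time.
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000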
00:13:10.287 00:13:10.287 real 0m0.270s 00:13:10.287 user 0m0.090s 00:13:10.287 sys 0m0.136s 00:13:10.287 ************************************ 00:13:10.287 END TEST nvme_startup 00:13:10.287 ************************************ 00:13:10.287 10:18:17 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.287 10:18:17 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:10.287 10:18:17 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:10.287 10:18:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.287 10:18:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.287 10:18:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.287 ************************************ 00:13:10.287 START TEST nvme_multi_secondary 00:13:10.287 ************************************ 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65041 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65042 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:10.287 10:18:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:13.574 Initializing NVMe Controllers 00:13:13.574 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:13.574 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:13.574 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:13.574 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:13.574 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:13.574 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:13.574 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:13.574 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:13.574 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:13.574 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:13.574 Initialization complete. Launching workers. 
00:13:13.574 ======================================================== 00:13:13.574 Latency(us) 00:13:13.574 Device Information : IOPS MiB/s Average min max 00:13:13.574 PCIE (0000:00:10.0) NSID 1 from core 1: 5251.16 20.51 3044.80 965.06 7480.44 00:13:13.574 PCIE (0000:00:11.0) NSID 1 from core 1: 5251.16 20.51 3046.77 997.53 7347.86 00:13:13.574 PCIE (0000:00:13.0) NSID 1 from core 1: 5251.16 20.51 3047.08 997.45 7537.78 00:13:13.574 PCIE (0000:00:12.0) NSID 1 from core 1: 5251.16 20.51 3047.25 1002.26 7475.87 00:13:13.574 PCIE (0000:00:12.0) NSID 2 from core 1: 5251.16 20.51 3047.58 1006.38 7527.78 00:13:13.574 PCIE (0000:00:12.0) NSID 3 from core 1: 5251.16 20.51 3047.99 990.13 7688.17 00:13:13.574 ======================================================== 00:13:13.574 Total : 31506.99 123.07 3046.91 965.06 7688.17 00:13:13.574 00:13:13.833 Initializing NVMe Controllers 00:13:13.833 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:13.833 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:13.833 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:13.833 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:13.833 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:13.833 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:13.833 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:13.833 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:13.833 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:13.833 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:13.833 Initialization complete. Launching workers. 00:13:13.833 ======================================================== 00:13:13.833 Latency(us) 00:13:13.833 Device Information : IOPS MiB/s Average min max 00:13:13.833 PCIE (0000:00:10.0) NSID 1 from core 2: 3108.09 12.14 5145.64 1294.80 16554.76 00:13:13.833 PCIE (0000:00:11.0) NSID 1 from core 2: 3108.09 12.14 5146.92 1311.45 16285.57 00:13:13.833 PCIE (0000:00:13.0) NSID 1 from core 2: 3108.09 12.14 5147.28 1371.28 13801.58 00:13:13.833 PCIE (0000:00:12.0) NSID 1 from core 2: 3108.09 12.14 5148.92 1478.78 16566.64 00:13:13.833 PCIE (0000:00:12.0) NSID 2 from core 2: 3108.09 12.14 5153.94 1393.55 14155.73 00:13:13.833 PCIE (0000:00:12.0) NSID 3 from core 2: 3108.09 12.14 5153.53 1327.95 13616.91 00:13:13.833 ======================================================== 00:13:13.833 Total : 18648.53 72.85 5149.37 1294.80 16566.64 00:13:13.833 00:13:13.833 10:18:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65041 00:13:15.738 Initializing NVMe Controllers 00:13:15.738 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:15.738 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:15.738 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:15.738 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:15.738 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:15.738 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:15.738 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:15.738 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:15.738 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:15.738 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:15.738 Initialization complete. Launching workers. 
00:13:15.738 ======================================================== 00:13:15.738 Latency(us) 00:13:15.738 Device Information : IOPS MiB/s Average min max 00:13:15.738 PCIE (0000:00:10.0) NSID 1 from core 0: 8191.82 32.00 1951.59 933.27 8192.68 00:13:15.738 PCIE (0000:00:11.0) NSID 1 from core 0: 8191.82 32.00 1952.68 949.50 7913.86 00:13:15.738 PCIE (0000:00:13.0) NSID 1 from core 0: 8191.82 32.00 1952.64 843.67 7425.15 00:13:15.738 PCIE (0000:00:12.0) NSID 1 from core 0: 8191.82 32.00 1952.60 799.25 7746.56 00:13:15.738 PCIE (0000:00:12.0) NSID 2 from core 0: 8191.82 32.00 1952.57 725.65 8600.71 00:13:15.738 PCIE (0000:00:12.0) NSID 3 from core 0: 8191.82 32.00 1952.54 724.87 8137.97 00:13:15.738 ======================================================== 00:13:15.738 Total : 49150.92 192.00 1952.44 724.87 8600.71 00:13:15.738 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65042 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65111 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65112 00:13:15.738 10:18:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:19.080 Initializing NVMe Controllers 00:13:19.080 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.080 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:19.080 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:19.080 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:19.080 Initialization complete. Launching workers. 
00:13:19.080 ======================================================== 00:13:19.080 Latency(us) 00:13:19.080 Device Information : IOPS MiB/s Average min max 00:13:19.080 PCIE (0000:00:10.0) NSID 1 from core 0: 5285.18 20.65 3025.02 941.43 7130.11 00:13:19.080 PCIE (0000:00:11.0) NSID 1 from core 0: 5285.18 20.65 3026.81 976.87 7235.65 00:13:19.080 PCIE (0000:00:13.0) NSID 1 from core 0: 5285.18 20.65 3027.22 948.66 6872.06 00:13:19.080 PCIE (0000:00:12.0) NSID 1 from core 0: 5285.18 20.65 3027.27 968.88 6410.24 00:13:19.080 PCIE (0000:00:12.0) NSID 2 from core 0: 5285.18 20.65 3027.56 968.71 6761.05 00:13:19.080 PCIE (0000:00:12.0) NSID 3 from core 0: 5285.18 20.65 3028.09 963.10 6887.66 00:13:19.080 ======================================================== 00:13:19.080 Total : 31711.08 123.87 3027.00 941.43 7235.65 00:13:19.080 00:13:19.080 Initializing NVMe Controllers 00:13:19.080 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.080 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.080 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:19.080 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:19.080 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:19.080 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:19.080 Initialization complete. Launching workers. 00:13:19.080 ======================================================== 00:13:19.080 Latency(us) 00:13:19.080 Device Information : IOPS MiB/s Average min max 00:13:19.080 PCIE (0000:00:10.0) NSID 1 from core 1: 5154.09 20.13 3101.86 984.08 8106.01 00:13:19.080 PCIE (0000:00:11.0) NSID 1 from core 1: 5154.09 20.13 3103.90 985.67 7563.54 00:13:19.080 PCIE (0000:00:13.0) NSID 1 from core 1: 5154.09 20.13 3104.09 1014.12 7128.58 00:13:19.080 PCIE (0000:00:12.0) NSID 1 from core 1: 5154.09 20.13 3104.20 1026.65 7323.84 00:13:19.080 PCIE (0000:00:12.0) NSID 2 from core 1: 5154.09 20.13 3104.14 1031.75 6935.47 00:13:19.080 PCIE (0000:00:12.0) NSID 3 from core 1: 5154.09 20.13 3104.08 1005.69 8095.95 00:13:19.080 ======================================================== 00:13:19.080 Total : 30924.56 120.80 3103.71 984.08 8106.01 00:13:19.080 00:13:20.982 Initializing NVMe Controllers 00:13:20.982 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:20.982 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:20.982 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:20.982 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:20.982 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:20.982 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:20.982 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:20.982 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:20.982 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:20.982 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:20.982 Initialization complete. Launching workers. 
00:13:20.982 ======================================================== 00:13:20.982 Latency(us) 00:13:20.982 Device Information : IOPS MiB/s Average min max 00:13:20.982 PCIE (0000:00:10.0) NSID 1 from core 2: 3225.22 12.60 4958.69 1053.85 13507.58 00:13:20.982 PCIE (0000:00:11.0) NSID 1 from core 2: 3225.22 12.60 4960.38 1075.06 13360.41 00:13:20.982 PCIE (0000:00:13.0) NSID 1 from core 2: 3225.22 12.60 4956.37 1057.57 13702.86 00:13:20.982 PCIE (0000:00:12.0) NSID 1 from core 2: 3225.22 12.60 4956.32 1047.78 13730.76 00:13:20.982 PCIE (0000:00:12.0) NSID 2 from core 2: 3225.22 12.60 4956.52 1031.46 12654.78 00:13:20.982 PCIE (0000:00:12.0) NSID 3 from core 2: 3225.22 12.60 4956.47 1071.79 12593.77 00:13:20.982 ======================================================== 00:13:20.982 Total : 19351.35 75.59 4957.46 1031.46 13730.76 00:13:20.982 00:13:21.242 10:18:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65111 00:13:21.242 10:18:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65112 00:13:21.242 00:13:21.242 real 0m11.041s 00:13:21.242 user 0m18.604s 00:13:21.242 sys 0m1.046s 00:13:21.242 10:18:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.242 ************************************ 00:13:21.242 10:18:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:21.242 END TEST nvme_multi_secondary 00:13:21.242 ************************************ 00:13:21.242 10:18:28 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:21.242 10:18:28 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:21.242 10:18:28 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64049 ]] 00:13:21.242 10:18:28 nvme -- common/autotest_common.sh@1094 -- # kill 64049 00:13:21.242 10:18:28 nvme -- common/autotest_common.sh@1095 -- # wait 64049 00:13:21.242 [2024-11-25 10:18:28.251671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.251984] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.252046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.252082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.256834] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.256916] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.256949] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.257000] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.261403] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 
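The nvme_multi_secondary stage that just completed exercises SPDK's multi-process mode: a primary spdk_nvme_perf (shared-memory instance id -i 0) and two secondary processes attach the same four controllers and drive them concurrently from different core masks, and a second round (pids 65111/65112) then moves the longer 5-second run to another core mask. Condensed from the traced invocations:

    # All three share instance id 0, so they attach the same controllers via shared memory.
    spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, lcore 0
    spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, lcore 1
    spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, lcore 2
    wait

The latency tables report per-namespace IOPS and completion latency from each process's point of view; contention among the three processes shows up in the spread (roughly 3.1k-5.3k IOPS per namespace at 3-5 ms, against ~8.2k IOPS at ~2 ms for the round-one primary, whose 5-second run likely finished after the 3-second secondaries had exited). The 'owning process ... is not found. Dropping the request.' errors around this point are expected teardown noise from kill_stub: pending admin requests whose owning process has already exited are simply discarded.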
00:13:21.242 [2024-11-25 10:18:28.261481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.261530] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.261565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.264780] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.264834] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.264855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.242 [2024-11-25 10:18:28.264877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64984) is not found. Dropping the request. 00:13:21.502 [2024-11-25 10:18:28.471274] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:13:21.502 10:18:28 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:21.502 10:18:28 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:21.502 10:18:28 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.502 10:18:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.502 10:18:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.502 10:18:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.502 ************************************ 00:13:21.502 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:21.502 ************************************ 00:13:21.502 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:21.762 * Looking for test storage... 
00:13:21.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:21.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.763 --rc genhtml_branch_coverage=1 00:13:21.763 --rc genhtml_function_coverage=1 00:13:21.763 --rc genhtml_legend=1 00:13:21.763 --rc geninfo_all_blocks=1 00:13:21.763 --rc geninfo_unexecuted_blocks=1 00:13:21.763 00:13:21.763 ' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:21.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.763 --rc genhtml_branch_coverage=1 00:13:21.763 --rc genhtml_function_coverage=1 00:13:21.763 --rc genhtml_legend=1 00:13:21.763 --rc geninfo_all_blocks=1 00:13:21.763 --rc geninfo_unexecuted_blocks=1 00:13:21.763 00:13:21.763 ' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:21.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.763 --rc genhtml_branch_coverage=1 00:13:21.763 --rc genhtml_function_coverage=1 00:13:21.763 --rc genhtml_legend=1 00:13:21.763 --rc geninfo_all_blocks=1 00:13:21.763 --rc geninfo_unexecuted_blocks=1 00:13:21.763 00:13:21.763 ' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:21.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.763 --rc genhtml_branch_coverage=1 00:13:21.763 --rc genhtml_function_coverage=1 00:13:21.763 --rc genhtml_legend=1 00:13:21.763 --rc geninfo_all_blocks=1 00:13:21.763 --rc geninfo_unexecuted_blocks=1 00:13:21.763 00:13:21.763 ' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:21.763 
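These constants fix the contract for bdev_nvme_reset_stuck_adm_cmd: the injected admin-command error (armed just below with SCT 0 / SC 1) holds the command for up to err_injection_timeout=15,000,000 us, and the stage passes only if a controller reset completes that stuck command with the injected status within test_timeout=5 seconds. The checks run at the end of the stage, roughly:

    # Pass criteria, as traced later at nvme_reset_stuck_adm_cmd.sh@75 and @79:
    (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) && exit 1
    (( diff_time > test_timeout )) && exit 1   # here diff_time=2, comfortably under 5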
10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65278 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65278 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65278 ']' 00:13:21.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
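The bdf resolved above comes from asking gen_nvme.sh for the JSON attach configuration and taking the first traddr; condensed from the traced get_first_nvme_bdf helpers:

    # As traced in common/autotest_common.sh; four controllers are found on this VM.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"   # -> 0000:00:10.0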
00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.763 10:18:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:22.022 [2024-11-25 10:18:28.959215] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:13:22.022 [2024-11-25 10:18:28.959351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65278 ] 00:13:22.321 [2024-11-25 10:18:29.160961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.321 [2024-11-25 10:18:29.286738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.321 [2024-11-25 10:18:29.286915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.321 [2024-11-25 10:18:29.287083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.321 [2024-11-25 10:18:29.287118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.259 nvme0n1 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_L6ZLy.txt 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.259 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.259 true 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732529910 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65301 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:23.260 10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:23.260 
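This block is the heart of the stage: an error injection is armed on nvme0 for admin opcode 10 (0x0a, Get Features) with --do_not_submit, so the next matching command is held for up to 15 s instead of reaching the device, and bdev_nvme_send_cmd then issues exactly such a command (cdw10 0x7, Number of Queues) in the background. Condensed to the RPC sequence (the base64 command buffer is left elided):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> &   # this one gets stuck
    sleep 2
    rpc.py bdev_nvme_reset_controller nvme0   # reset must complete the held command manually

The reset output below ('Command completed manually', with GET FEATURES NUMBER OF QUEUES completed as INVALID OPCODE (00/01)) confirms the held command was finished by the reset path with the injected SCT/SC rather than by the device.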
10:18:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:25.162 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:25.162 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.162 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:25.162 [2024-11-25 10:18:32.270204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:25.162 [2024-11-25 10:18:32.270639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:25.162 [2024-11-25 10:18:32.270761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:25.162 [2024-11-25 10:18:32.270868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.420 [2024-11-25 10:18:32.272777] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65301 00:13:25.420 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65301 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65301 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_L6ZLy.txt 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:25.420 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_L6ZLy.txt 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65278 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65278 ']' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65278 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65278 00:13:25.421 killing process with pid 65278 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65278' 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65278 00:13:25.421 10:18:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65278 00:13:27.954 10:18:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:27.954 10:18:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:27.954 00:13:27.954 real 0m6.369s 00:13:27.954 user 0m22.203s 00:13:27.954 sys 0m0.768s 00:13:27.954 ************************************ 00:13:27.954 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:13:27.954 ************************************ 00:13:27.954 10:18:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.954 10:18:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:27.954 10:18:34 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:27.954 10:18:34 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:27.954 10:18:34 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.954 10:18:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.954 10:18:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.954 ************************************ 00:13:27.954 START TEST nvme_fio 00:13:27.954 ************************************ 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:27.954 10:18:34 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:27.954 10:18:34 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:27.954 10:18:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:27.954 10:18:34 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:27.954 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:27.954 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:27.954 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:27.955 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:27.955 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:27.955 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:27.955 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:28.523 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:28.523 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:28.523 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:28.523 10:18:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:28.523 10:18:35 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:28.523 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:28.781 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:28.781 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:28.781 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:28.781 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:28.781 10:18:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:28.781 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:28.781 fio-3.35 00:13:28.781 Starting 1 thread 00:13:32.100 00:13:32.100 test: (groupid=0, jobs=1): err= 0: pid=65457: Mon Nov 25 10:18:39 2024 00:13:32.100 read: IOPS=20.6k, BW=80.6MiB/s (84.5MB/s)(161MiB/2001msec) 00:13:32.100 slat (usec): min=3, max=820, avg= 5.00, stdev= 4.87 00:13:32.100 clat (usec): min=260, max=12414, avg=3094.01, stdev=737.59 00:13:32.100 lat (usec): min=267, max=12494, avg=3099.01, stdev=738.63 00:13:32.100 clat percentiles (usec): 00:13:32.100 | 1.00th=[ 2040], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:13:32.100 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:32.100 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3523], 95.00th=[ 4228], 00:13:32.100 | 99.00th=[ 6456], 99.50th=[ 8225], 99.90th=[11207], 99.95th=[11863], 00:13:32.100 | 99.99th=[12387] 00:13:32.100 bw ( KiB/s): min=83816, max=86152, per=100.00%, avg=84760.00, stdev=1230.75, samples=3 00:13:32.100 iops : min=20954, max=21538, avg=21190.00, stdev=307.69, samples=3 00:13:32.100 write: IOPS=20.6k, BW=80.3MiB/s (84.2MB/s)(161MiB/2001msec); 0 zone resets 00:13:32.100 slat (usec): min=3, max=514, avg= 5.16, stdev= 3.03 00:13:32.100 clat (usec): min=233, max=12449, avg=3092.73, stdev=746.16 00:13:32.100 lat (usec): min=240, max=12454, avg=3097.89, stdev=747.09 00:13:32.100 clat percentiles (usec): 00:13:32.100 | 1.00th=[ 2073], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:13:32.100 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:32.100 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3490], 95.00th=[ 4146], 00:13:32.100 | 99.00th=[ 6587], 99.50th=[ 8455], 99.90th=[11076], 99.95th=[11731], 00:13:32.100 | 99.99th=[12256] 00:13:32.100 bw ( KiB/s): min=83816, max=86064, per=100.00%, avg=84853.33, stdev=1133.98, samples=3 00:13:32.100 iops : min=20954, max=21516, avg=21213.33, stdev=283.49, samples=3 00:13:32.100 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:32.100 lat (msec) : 2=0.87%, 4=93.35%, 10=5.56%, 20=0.19% 00:13:32.100 cpu : usr=98.65%, sys=0.25%, ctx=22, majf=0, 
minf=607 00:13:32.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:32.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:32.100 issued rwts: total=41276,41139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:32.100 00:13:32.100 Run status group 0 (all jobs): 00:13:32.100 READ: bw=80.6MiB/s (84.5MB/s), 80.6MiB/s-80.6MiB/s (84.5MB/s-84.5MB/s), io=161MiB (169MB), run=2001-2001msec 00:13:32.100 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=161MiB (169MB), run=2001-2001msec 00:13:32.358 ----------------------------------------------------- 00:13:32.358 Suppressions used: 00:13:32.358 count bytes template 00:13:32.358 1 32 /usr/src/fio/parse.c 00:13:32.358 1 8 libtcmalloc_minimal.so 00:13:32.358 ----------------------------------------------------- 00:13:32.358 00:13:32.358 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:32.358 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:32.358 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:32.358 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:32.617 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:32.617 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:33.186 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:33.186 10:18:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.186 10:18:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:33.186 10:18:40 nvme.nvme_fio -- 
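Each controller gets the same preamble before fio: spdk_nvme_identify must report at least one namespace, and a grep for 'Extended Data LBA' decides whether the namespace stores metadata inline with its LBAs, which would require a metadata-inclusive block size; none of these namespaces do, so every run settles on bs=4096. A plausible reading of the traced nvme.sh@35-41 logic (the 4160 branch is an assumption, not exercised in this run):

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" \
        | grep -q 'Extended Data LBA'; then
        bs=4160   # 4096 B data + 64 B inline metadata (assumed value)
    else
        bs=4096
    fi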
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:33.186 10:18:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:33.186 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:33.186 fio-3.35 00:13:33.186 Starting 1 thread 00:13:37.436 00:13:37.436 test: (groupid=0, jobs=1): err= 0: pid=65523: Mon Nov 25 10:18:43 2024 00:13:37.436 read: IOPS=21.5k, BW=83.9MiB/s (87.9MB/s)(168MiB/2001msec) 00:13:37.436 slat (nsec): min=3964, max=51840, avg=4732.27, stdev=1119.59 00:13:37.436 clat (usec): min=271, max=10078, avg=2975.03, stdev=410.78 00:13:37.436 lat (usec): min=276, max=10130, avg=2979.76, stdev=411.37 00:13:37.436 clat percentiles (usec): 00:13:37.436 | 1.00th=[ 2671], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:13:37.436 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:37.436 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3687], 00:13:37.436 | 99.00th=[ 4228], 99.50th=[ 5669], 99.90th=[ 8356], 99.95th=[ 8586], 00:13:37.436 | 99.99th=[ 9896] 00:13:37.436 bw ( KiB/s): min=82016, max=86248, per=98.53%, avg=84608.00, stdev=2270.94, samples=3 00:13:37.436 iops : min=20506, max=21562, avg=21152.67, stdev=566.59, samples=3 00:13:37.436 write: IOPS=21.3k, BW=83.2MiB/s (87.2MB/s)(166MiB/2001msec); 0 zone resets 00:13:37.436 slat (nsec): min=4097, max=34050, avg=4910.40, stdev=1119.63 00:13:37.436 clat (usec): min=195, max=9980, avg=2983.17, stdev=410.14 00:13:37.436 lat (usec): min=200, max=10001, avg=2988.09, stdev=410.70 00:13:37.436 clat percentiles (usec): 00:13:37.436 | 1.00th=[ 2671], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:13:37.436 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:37.436 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3687], 00:13:37.436 | 99.00th=[ 4228], 99.50th=[ 5407], 99.90th=[ 8455], 99.95th=[ 8586], 00:13:37.436 | 99.99th=[ 9634] 00:13:37.436 bw ( KiB/s): min=82664, max=85832, per=99.43%, avg=84717.33, stdev=1780.41, samples=3 00:13:37.436 iops : min=20666, max=21458, avg=21179.33, stdev=445.10, samples=3 00:13:37.436 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:37.436 lat (msec) : 2=0.18%, 4=98.02%, 10=1.76%, 20=0.01% 00:13:37.436 cpu : usr=99.20%, sys=0.15%, ctx=7, majf=0, minf=607 00:13:37.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:37.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:37.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:37.436 issued rwts: total=42957,42621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:37.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:37.436 00:13:37.436 Run status group 0 (all jobs): 00:13:37.436 READ: bw=83.9MiB/s (87.9MB/s), 83.9MiB/s-83.9MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:13:37.436 WRITE: bw=83.2MiB/s (87.2MB/s), 83.2MiB/s-83.2MiB/s (87.2MB/s-87.2MB/s), io=166MiB (175MB), run=2001-2001msec 00:13:37.436 ----------------------------------------------------- 00:13:37.436 Suppressions used: 00:13:37.436 count bytes template 00:13:37.436 1 32 /usr/src/fio/parse.c 00:13:37.436 1 8 libtcmalloc_minimal.so 00:13:37.436 ----------------------------------------------------- 00:13:37.436 
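The fio runs themselves use SPDK's external ioengine: the spdk_nvme plugin is LD_PRELOADed (together with libasan, since this is an ASAN build), and the target controller is encoded in --filename with dots standing in for the colons of the PCI address, which fio would otherwise treat as separators. The shape of each invocation, as traced:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

All four controllers land in the same band (roughly 80-87 MiB/s per direction of 4 KiB randrw at iodepth 128, with ~3 ms mean completion latency), a consistent profile for the emulated [1b36:0010] QEMU devices.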
00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:37.436 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:37.712 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:37.712 10:18:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:37.712 10:18:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:37.971 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:37.971 fio-3.35 00:13:37.971 Starting 1 thread 00:13:42.164 00:13:42.164 test: (groupid=0, jobs=1): err= 0: pid=65589: Mon Nov 25 10:18:48 2024 00:13:42.164 read: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec) 00:13:42.164 slat (nsec): min=4073, max=51567, avg=4783.18, stdev=1138.25 00:13:42.164 clat (usec): min=610, max=10900, avg=2967.95, stdev=362.69 00:13:42.164 lat (usec): min=622, max=10952, avg=2972.73, stdev=363.20 00:13:42.164 clat percentiles (usec): 00:13:42.164 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:13:42.164 | 
30.00th=[ 2900], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:13:42.164 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3064], 00:13:42.164 | 99.00th=[ 3949], 99.50th=[ 5866], 99.90th=[ 8291], 99.95th=[ 8586], 00:13:42.164 | 99.99th=[10683] 00:13:42.164 bw ( KiB/s): min=82304, max=86840, per=99.11%, avg=85322.67, stdev=2614.25, samples=3 00:13:42.164 iops : min=20576, max=21710, avg=21330.67, stdev=653.56, samples=3 00:13:42.164 write: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:13:42.164 slat (usec): min=4, max=102, avg= 4.96, stdev= 1.35 00:13:42.164 clat (usec): min=680, max=10800, avg=2974.73, stdev=376.86 00:13:42.164 lat (usec): min=692, max=10822, avg=2979.69, stdev=377.46 00:13:42.164 clat percentiles (usec): 00:13:42.164 | 1.00th=[ 2737], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:13:42.164 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:13:42.164 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3032], 95.00th=[ 3064], 00:13:42.164 | 99.00th=[ 4080], 99.50th=[ 6128], 99.90th=[ 8291], 99.95th=[ 8586], 00:13:42.164 | 99.99th=[10290] 00:13:42.164 bw ( KiB/s): min=82112, max=87424, per=100.00%, avg=85493.33, stdev=2938.14, samples=3 00:13:42.164 iops : min=20528, max=21856, avg=21373.33, stdev=734.53, samples=3 00:13:42.164 lat (usec) : 750=0.01%, 1000=0.01% 00:13:42.164 lat (msec) : 2=0.02%, 4=98.97%, 10=0.98%, 20=0.02% 00:13:42.164 cpu : usr=99.45%, sys=0.00%, ctx=2, majf=0, minf=607 00:13:42.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:42.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:42.164 issued rwts: total=43067,42745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:42.164 00:13:42.164 Run status group 0 (all jobs): 00:13:42.164 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:13:42.164 WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:13:42.164 ----------------------------------------------------- 00:13:42.164 Suppressions used: 00:13:42.164 count bytes template 00:13:42.164 1 32 /usr/src/fio/parse.c 00:13:42.164 1 8 libtcmalloc_minimal.so 00:13:42.164 ----------------------------------------------------- 00:13:42.164 00:13:42.164 10:18:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:42.164 10:18:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:42.164 10:18:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:42.164 10:18:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:42.164 10:18:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:42.164 10:18:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:42.423 10:18:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:42.423 10:18:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:42.423 10:18:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:42.681 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:42.681 fio-3.35 00:13:42.681 Starting 1 thread 00:13:47.948 00:13:47.948 test: (groupid=0, jobs=1): err= 0: pid=65651: Mon Nov 25 10:18:54 2024 00:13:47.948 read: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec) 00:13:47.948 slat (nsec): min=3796, max=61582, avg=4693.26, stdev=1056.33 00:13:47.948 clat (usec): min=190, max=11061, avg=2879.76, stdev=411.26 00:13:47.948 lat (usec): min=194, max=11115, avg=2884.46, stdev=411.50 00:13:47.948 clat percentiles (usec): 00:13:47.948 | 1.00th=[ 1696], 5.00th=[ 2311], 10.00th=[ 2671], 20.00th=[ 2802], 00:13:47.948 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:13:47.948 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3163], 00:13:47.948 | 99.00th=[ 4228], 99.50th=[ 5014], 99.90th=[ 7439], 99.95th=[ 8160], 00:13:47.948 | 99.99th=[10814] 00:13:47.948 bw ( KiB/s): min=83000, max=92584, per=99.62%, avg=88360.00, stdev=4891.95, samples=3 00:13:47.948 iops : min=20750, max=23146, avg=22090.00, stdev=1222.99, samples=3 00:13:47.948 write: IOPS=22.0k, BW=86.0MiB/s (90.2MB/s)(172MiB/2001msec); 0 zone resets 00:13:47.948 slat (nsec): min=3970, max=72599, avg=4918.69, stdev=1141.22 00:13:47.948 clat (usec): min=260, max=10885, avg=2885.11, stdev=411.08 00:13:47.948 lat (usec): min=265, max=10906, avg=2890.03, stdev=411.30 00:13:47.948 clat percentiles (usec): 00:13:47.948 | 1.00th=[ 1729], 5.00th=[ 2311], 10.00th=[ 2704], 20.00th=[ 2802], 00:13:47.948 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:13:47.948 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3195], 
00:13:47.948 | 99.00th=[ 4228], 99.50th=[ 4948], 99.90th=[ 7570], 99.95th=[ 8455], 00:13:47.948 | 99.99th=[10421] 00:13:47.948 bw ( KiB/s): min=83032, max=93256, per=100.00%, avg=88490.67, stdev=5147.14, samples=3 00:13:47.948 iops : min=20758, max=23314, avg=22122.67, stdev=1286.79, samples=3 00:13:47.948 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:13:47.948 lat (msec) : 2=2.18%, 4=96.44%, 10=1.29%, 20=0.02% 00:13:47.948 cpu : usr=99.20%, sys=0.20%, ctx=18, majf=0, minf=605 00:13:47.948 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:47.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:47.948 issued rwts: total=44371,44060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.948 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:47.948 00:13:47.948 Run status group 0 (all jobs): 00:13:47.948 READ: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:13:47.948 WRITE: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=172MiB (180MB), run=2001-2001msec 00:13:47.948 ----------------------------------------------------- 00:13:47.948 Suppressions used: 00:13:47.948 count bytes template 00:13:47.948 1 32 /usr/src/fio/parse.c 00:13:47.948 1 8 libtcmalloc_minimal.so 00:13:47.948 ----------------------------------------------------- 00:13:47.948 00:13:47.948 10:18:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:47.948 10:18:54 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:47.948 00:13:47.948 real 0m19.947s 00:13:47.948 user 0m15.916s 00:13:47.948 sys 0m3.336s 00:13:47.948 10:18:54 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.948 ************************************ 00:13:47.948 END TEST nvme_fio 00:13:47.948 ************************************ 00:13:47.948 10:18:54 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:47.948 00:13:47.948 real 1m35.260s 00:13:47.948 user 3m44.491s 00:13:47.948 sys 0m22.594s 00:13:47.948 10:18:54 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.948 ************************************ 00:13:47.948 END TEST nvme 00:13:47.948 ************************************ 00:13:47.948 10:18:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.948 10:18:55 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:47.948 10:18:55 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:47.948 10:18:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.948 10:18:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.948 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:13:47.948 ************************************ 00:13:47.948 START TEST nvme_scc 00:13:47.948 ************************************ 00:13:47.948 10:18:55 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:48.233 * Looking for test storage... 
00:13:48.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.234 --rc genhtml_branch_coverage=1 00:13:48.234 --rc genhtml_function_coverage=1 00:13:48.234 --rc genhtml_legend=1 00:13:48.234 --rc geninfo_all_blocks=1 00:13:48.234 --rc geninfo_unexecuted_blocks=1 00:13:48.234 00:13:48.234 ' 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.234 --rc genhtml_branch_coverage=1 00:13:48.234 --rc genhtml_function_coverage=1 00:13:48.234 --rc genhtml_legend=1 00:13:48.234 --rc geninfo_all_blocks=1 00:13:48.234 --rc geninfo_unexecuted_blocks=1 00:13:48.234 00:13:48.234 ' 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.234 --rc genhtml_branch_coverage=1 00:13:48.234 --rc genhtml_function_coverage=1 00:13:48.234 --rc genhtml_legend=1 00:13:48.234 --rc geninfo_all_blocks=1 00:13:48.234 --rc geninfo_unexecuted_blocks=1 00:13:48.234 00:13:48.234 ' 00:13:48.234 10:18:55 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.234 --rc genhtml_branch_coverage=1 00:13:48.234 --rc genhtml_function_coverage=1 00:13:48.234 --rc genhtml_legend=1 00:13:48.234 --rc geninfo_all_blocks=1 00:13:48.234 --rc geninfo_unexecuted_blocks=1 00:13:48.234 00:13:48.234 ' 00:13:48.234 10:18:55 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.234 10:18:55 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.234 10:18:55 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.234 10:18:55 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.234 10:18:55 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.234 10:18:55 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:48.234 10:18:55 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
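Editor's note: the PATH echoed above carries four copies of the /opt/golangci, /opt/protoc, and /opt/go prefixes because paths/export.sh prepends unconditionally and appears to be sourced more than once across nested shells. A hypothetical guard that would make the prepend idempotent; prepend_path is not part of the tree, this is only a sketch:

    # Hypothetical idempotent prepend; the in-tree paths/export.sh
    # prepends unconditionally, hence the duplicates echoed above.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH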
00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:48.234 10:18:55 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:48.234 10:18:55 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.234 10:18:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:48.234 10:18:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:48.234 10:18:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:48.234 10:18:55 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:48.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:49.074 Waiting for block devices as requested 00:13:49.074 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.334 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.334 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:49.594 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:54.886 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:54.886 10:19:01 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:54.886 10:19:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:54.886 10:19:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:54.886 10:19:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:54.886 10:19:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
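Editor's note: scan_nvme_ctrls walks /sys/class/nvme/nvme*, maps each controller to its PCI address, and then nvme_get turns every "field : value" line of nvme id-ctrl output into an entry of a global associative array (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on), which is what produces the long eval trace that follows. A condensed sketch of that loop, assuming bash 4.3+ namerefs in place of the eval/shift mechanics the real test/common/nvme/functions.sh uses:

    # Condensed sketch of nvme_get as traced above; the in-tree
    # version uses eval rather than a nameref.
    nvme_get() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -n arr=$ref                # e.g. nvme0, declared -gA by the caller
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # field name: vid, sn, mdts, ...
            [[ -n $reg && -n $val ]] || continue
            arr[$reg]=${val# }           # e.g. arr[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }
    # usage mirroring the trace: nvme_get nvme0 id-ctrl /dev/nvme0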
00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.886 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.887 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:54.888 10:19:01 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.888 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:54.889 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:54.889 
10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:54.889 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
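The xtrace above is nvme/functions.sh's nvme_get helper at work: it runs the bundled nvme-cli binary (/usr/local/src/nvme-cli/nvme), splits each output line at the first ':' into a register name and a value, and evals the pair into a global associative array named after the device (e.g. ng0n1[nsze]=0x140000). A minimal sketch of that parse loop, assuming a stock `nvme` binary on PATH — not the verbatim functions.sh implementation:

    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"               # same trick as functions.sh@20
        while IFS=: read -r reg val; do   # split at the first ':' only
            [[ -n $val ]] || continue     # header/blank lines carry no value
            reg=${reg//[[:space:]]/}      # "ps 0" becomes the key "ps0"
            val=${val# }                  # drop the space after the colon
            eval "${ref}[\$reg]=\$val"    # e.g. ng0n1[nsze]=0x140000
        done < <(nvme id-ns "$dev")
    }
    # usage: nvme_get_sketch ng0n1 /dev/ng0n1; echo "${ng0n1[nsze]}"

Splitting only at the first ':' matters because many values (e.g. "mp:25.00W operational enlat:16 ...") themselves contain colons; read -r keeps everything after the first delimiter in val.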
00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:54.890 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:54.890 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:54.891 10:19:01 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:54.891 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.891 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:54.892 10:19:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:54.892 10:19:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:54.892 10:19:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:54.892 10:19:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:54.892 10:19:01 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.892 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 
10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:54.893 
10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:13:54.893 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
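The same register walk is now repeating for the second controller, nvme1 (the QEMU controller at PCI 0000:00:10.0, serial 12340), this time via id-ctrl rather than id-ns. Once the array is fully populated, later test code can read identify fields as plain associative-array lookups; a hypothetical example, assuming nvme1 ends up filled in exactly as traced in this run (ONCS bit meanings per the NVMe spec, not from this log):

    echo "serial: ${nvme1[sn]}"            # "12340 "
    echo "model:  ${nvme1[mn]}"            # "QEMU NVMe Ctrl "
    echo "mdts:   ${nvme1[mdts]}"          # 7 => max transfer 2^7 * min page size
    if (( ${nvme1[oncs]} & 0x04 )); then   # ONCS bit 2 of 0x15d: Dataset Management
        echo "controller supports Dataset Management"
    fi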
00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:54.894 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.895 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.895 10:19:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.895 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
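[Editor's note] The ng1n1 records just captured (nsze/ncap/nuse all 0x17a17a, flbas=0x7) are enough to work out the namespace size: the low nibble of flbas selects the in-use LBA format, which the trace reports further down as "ms:64 lbads:12 (in use)". A small sketch of that arithmetic, using the values above:

    # Sketch: nsze blocks * 2^lbads bytes/block, with the format index
    # taken from the low nibble of flbas (0x7 -> LBA format 7, lbads:12).
    nsze=0x17a17a; flbas=0x7; lbads=12
    fmt=$(( flbas & 0xf ))
    bytes=$(( nsze * (1 << lbads) ))
    echo "format $fmt: $(( nsze )) blocks -> $bytes bytes"
    # -> format 7: 1548666 blocks -> 6343335936 bytes (~5.9 GiB)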
00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:54.896 10:19:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.896 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 
10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
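[Editor's note] The loop has now matched its second node for this controller: the same @(...) alternation that picked up the generic char node ng1n1 earlier is re-entered for the block node nvme1n1, and both get identical id-ns data. That glob needs extglob; a self-contained sketch of the enumeration, lifted from the pattern visible in the trace:

    # Sketch: enumerate both the generic (ngXnY) and block (nvmeXnY)
    # namespace nodes under one controller, as functions.sh@54 does.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      echo "namespace node: ${ns##*/}"   # prints ng1n1, then nvme1n1
    done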
00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.897 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:54.898 
10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.898 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:54.899 10:19:01 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:54.899 10:19:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:54.899 10:19:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:54.899 10:19:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:54.899 10:19:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.899 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
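[Editor's note] Earlier in the trace nvme1 reported oncs=0x15d. Since this suite is nvme_scc (Simple Copy Command), the bit that presumably matters downstream is ONCS bit 8, which advertises Copy command support; 0x15d = 1 0101 1101b, so it is set here. A hedged sketch of that check (the gating logic is my assumption, not a quote of the test script):

    # Sketch: ONCS bit 8 advertises the Copy command; 0x15d has it set.
    oncs=0x15d
    if (( (oncs >> 8) & 1 )); then
      echo "controller supports Simple Copy"
    else
      echo "no Copy support"
    fi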
00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:54.900 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
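[Editor's note] The wctemp=343 / cctemp=373 pair just recorded is in Kelvin, as the NVMe spec defines the WCTEMP/CCTEMP thresholds; converting makes the values recognizable. A trivial sketch:

    # Sketch: WCTEMP/CCTEMP are Kelvin; 343 K / 373 K -> 70 C / 100 C.
    wctemp=343; cctemp=373
    echo "warning at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"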
00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.900 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:54.901 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:54.901 
00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.901 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 --
# IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.902 
10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
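The functions.sh@53-58 lines just above show how the script walks a controller's namespaces: it globs /sys/class/nvme/nvme2 for both the generic character devices (ng2n1) and the block devices (nvme2n1), runs nvme_get with id-ns on each, and records the device name in the nvme2_ns map; the same pattern repeats below for ng2n2 and ng2n3. A sketch of that walk, reconstructed from the trace (the wrapper function name here is hypothetical, and nvme_get is the sketch given earlier):

```bash
#!/usr/bin/env bash
# Sketch of the namespace walk traced above (nvme/functions.sh@53-58).
# scan_ctrl_namespaces is a hypothetical wrapper name, not from the trace.
shopt -s extglob nullglob

scan_ctrl_namespaces() {
	local ctrl=$1 ns ns_dev                  # e.g. ctrl=/sys/class/nvme/nvme2
	local -n _ctrl_ns=${ctrl##*/}_ns         # nameref into nvme2_ns, as at @53

	# @("ng2"|"nvme2n")* matches generic (ng2n1) and block (nvme2n1) nodes
	for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
		[[ -e $ns ]] || continue             # @55: the sysfs entry must exist
		ns_dev=${ns##*/}                     # ng2n1, ng2n2, ng2n3, ...
		nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills ng2n1[], ng2n2[], ...
		_ctrl_ns[${ns##*n}]=$ns_dev          # @58: index by namespace id
	done
}

# scan_ctrl_namespaces /sys/class/nvme/nvme2
# echo "${ng2n1[nsze]}"   # -> 0x100000
```

Indexing by "${ns##*n}" strips everything through the last 'n' in the path, so both ng2n1 and nvme2n1 land on _ctrl_ns[1]; whichever node the glob yields last for a given namespace id wins.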
00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:54.902 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.903 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:54.904 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 
10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.904 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.905 10:19:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:54.905 10:19:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:54.905 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.906 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:54.907 10:19:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.171 10:19:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:55.171 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:55.172 10:19:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:55.172 10:19:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.172 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:55.173 10:19:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.173 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:55.174 
10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:55.174 10:19:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:55.174 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:55.175 10:19:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.175 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:55.176 10:19:02 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:55.176 10:19:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:55.176 10:19:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:55.176 10:19:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:55.176 10:19:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:55.176 10:19:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:55.176 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:55.177 10:19:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 
10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:55.177 10:19:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.177 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 
10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:55.178 
10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.178 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:55.179 10:19:02 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:55.179 10:19:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
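The trace here is the harness's controller-selection pass: get_ctrls_with_feature walks every controller that scan_nvme_ctrls discovered, and ctrl_has_scc reads the ONCS (Optional NVM Command Support) word that nvme_get captured from 'nvme id-ctrl' and tests bit 8, the Simple Copy Command bit. All four QEMU controllers report oncs=0x15d, whose 0x100 bit is set, so each passes and nvme1 is the first one echoed back. A minimal standalone sketch of the same check, assuming nvme-cli is on PATH and the device node exists; the helper name has_scc and the single local associative array are ours, a simplification of the harness's eval-into-nvmeN arrays:

    #!/usr/bin/env bash
    # Parse 'nvme id-ctrl' key:value output into an associative array,
    # then test ONCS bit 8 (Simple Copy Command support) -- the same
    # bit test the trace shows as (( oncs & 1 << 8 )).
    has_scc() {
        local dev=$1 reg val
        local -A ctrl=()
        while IFS=: read -r reg val; do
            # 'oncs      : 0x15d' -> ctrl[oncs]=0x15d; stripping all
            # padding is fine for the numeric fields needed here.
            ctrl[${reg// /}]=${val// /}
        done < <(nvme id-ctrl "$dev")
        (( ctrl[oncs] & 1 << 8 ))
    }

    has_scc /dev/nvme1 && echo "nvme1 supports Simple Copy"

Caching the parsed fields in an array instead of grepping per register mirrors the harness's design: one id-ctrl invocation per controller, after which every later feature probe (oncs here, ctratt for the FDP run that follows) is a plain lookup.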
00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:55.179 10:19:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:55.180 10:19:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:55.180 10:19:02 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:55.180 10:19:02 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:55.180 10:19:02 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:55.180 10:19:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:55.180 10:19:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:55.180 10:19:02 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:55.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:56.687 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.687 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.687 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.687 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:56.687 10:19:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:56.687 10:19:03 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:56.687 10:19:03 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.687 10:19:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:56.687 ************************************ 00:13:56.687 START TEST nvme_simple_copy 00:13:56.687 ************************************ 00:13:56.687 10:19:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:57.255 Initializing NVMe Controllers 00:13:57.255 Attaching to 0000:00:10.0 00:13:57.255 Controller supports SCC. Attached to 0000:00:10.0 00:13:57.255 Namespace ID: 1 size: 6GB 00:13:57.255 Initialization complete. 
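Below, simple_copy attaches to 0000:00:10.0 through SPDK's userspace driver, fills LBAs 0-63 with random data, issues one Simple Copy to destination LBA 256, and verifies all 64 LBAs match. The same round trip can be reproduced with nvme-cli against a kernel-bound namespace. A hedged sketch, not the SPDK app itself: it assumes the controller has been handed back to the kernel nvme driver (setup.sh reset), that the installed nvme-cli provides the copy subcommand with the --sdlba/--slbs/--blocks spellings (verify with 'nvme copy --help'; the NLB-style count fields are 0-based per the spec), and the 4096-byte block size reported in the log; the device path is illustrative:

    #!/usr/bin/env bash
    set -e
    dev=/dev/nvme1n1      # illustrative; pick the namespace behind 0000:00:10.0
    bs=4096               # 'Namespace Block Size:4096' in the log
    n=64                  # LBAs 0..63, as in the test

    head -c $((n * bs)) /dev/urandom > pattern.bin

    # Writing LBAs 0 to 63 with random data (block-count is a 0-based NLB).
    nvme write "$dev" --start-block=0 --block-count=$((n - 1)) \
         --data-size=$((n * bs)) --data=pattern.bin

    # Simple Copy: one source range, LBAs 0-63 -> destination LBA 256.
    nvme copy "$dev" --sdlba=256 --slbs=0 --blocks=$((n - 1))

    # Read the destination back and confirm it matches what was written.
    nvme read "$dev" --start-block=256 --block-count=$((n - 1)) \
         --data-size=$((n * bs)) --data=copy.bin
    cmp -s pattern.bin copy.bin && echo "LBAs matching Written Data: $n"

The in-tree app performs the same sequence against the PCIe device directly, which is why the harness had to pick an SCC-capable controller first: a Simple Copy sent to a controller without ONCS bit 8 would fail with an invalid-opcode status.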
00:13:57.255 00:13:57.255 Controller QEMU NVMe Ctrl (12340 ) 00:13:57.255 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:57.255 Namespace Block Size:4096 00:13:57.255 Writing LBAs 0 to 63 with Random Data 00:13:57.255 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:57.255 LBAs matching Written Data: 64 00:13:57.255 00:13:57.255 real 0m0.313s 00:13:57.255 user 0m0.121s 00:13:57.255 sys 0m0.090s 00:13:57.255 10:19:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.255 ************************************ 00:13:57.255 END TEST nvme_simple_copy 00:13:57.255 ************************************ 00:13:57.255 10:19:04 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:57.255 ************************************ 00:13:57.255 END TEST nvme_scc 00:13:57.255 ************************************ 00:13:57.255 00:13:57.255 real 0m9.117s 00:13:57.255 user 0m1.595s 00:13:57.255 sys 0m2.444s 00:13:57.255 10:19:04 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.255 10:19:04 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:57.255 10:19:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:57.255 10:19:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:57.255 10:19:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:57.255 10:19:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:57.255 10:19:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:57.255 10:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.255 10:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.255 10:19:04 -- common/autotest_common.sh@10 -- # set +x 00:13:57.255 ************************************ 00:13:57.255 START TEST nvme_fdp 00:13:57.255 ************************************ 00:13:57.255 10:19:04 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:57.255 * Looking for test storage... 00:13:57.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:57.255 10:19:04 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:57.255 10:19:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:13:57.255 10:19:04 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.515 --rc genhtml_branch_coverage=1 00:13:57.515 --rc genhtml_function_coverage=1 00:13:57.515 --rc genhtml_legend=1 00:13:57.515 --rc geninfo_all_blocks=1 00:13:57.515 --rc geninfo_unexecuted_blocks=1 00:13:57.515 00:13:57.515 ' 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.515 --rc genhtml_branch_coverage=1 00:13:57.515 --rc genhtml_function_coverage=1 00:13:57.515 --rc genhtml_legend=1 00:13:57.515 --rc geninfo_all_blocks=1 00:13:57.515 --rc geninfo_unexecuted_blocks=1 00:13:57.515 00:13:57.515 ' 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.515 --rc genhtml_branch_coverage=1 00:13:57.515 --rc genhtml_function_coverage=1 00:13:57.515 --rc genhtml_legend=1 00:13:57.515 --rc geninfo_all_blocks=1 00:13:57.515 --rc geninfo_unexecuted_blocks=1 00:13:57.515 00:13:57.515 ' 00:13:57.515 10:19:04 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:57.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.515 --rc genhtml_branch_coverage=1 00:13:57.515 --rc genhtml_function_coverage=1 00:13:57.515 --rc genhtml_legend=1 00:13:57.515 --rc geninfo_all_blocks=1 00:13:57.515 --rc geninfo_unexecuted_blocks=1 00:13:57.515 00:13:57.515 ' 00:13:57.515 10:19:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.515 10:19:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:57.515 10:19:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:57.515 10:19:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:57.515 10:19:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.515 10:19:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.515 10:19:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.516 10:19:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.516 10:19:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.516 10:19:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:57.516 10:19:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:57.516 10:19:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:57.516 10:19:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:57.516 10:19:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:58.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.343 Waiting for block devices as requested 00:13:58.343 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.343 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.602 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.602 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:03.872 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:03.872 10:19:10 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:14:03.872 10:19:10 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:14:03.872 10:19:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:03.872 10:19:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:03.872 10:19:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:03.872 10:19:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.872 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:14:03.873 10:19:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:14:03.873 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:03.874 10:19:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.874 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:03.875 10:19:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.875 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 
10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:03.876 10:19:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:14:03.876 10:19:10 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:14:03.876 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:14:03.877 10:19:10 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
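[annotation] The nvme_get loop traced above is converting "reg : val" lines from nvme-cli into a global Bash associative array, one register per line. A minimal standalone sketch of that same pattern follows; it assumes nvme-cli is on PATH and that a /dev/ng0n1 character device exists (both assumptions here), and it is not the SPDK functions.sh source itself:

#!/usr/bin/env bash
# Sketch of the parse pattern seen in the trace: split each "reg : val"
# line on the first ':' and store the trimmed pair in an associative array.
shopt -s extglob                 # the trace enables extglob in scripts/common.sh too
declare -A ns_regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # key, e.g. "nsze", "flbas", "dpc"
    val=${val##+([[:space:]])}   # drop leading whitespace from the value
    [[ -n $reg && -n $val ]] && ns_regs[$reg]=$val
done < <(nvme id-ns /dev/ng0n1)  # device path is an assumption
printf 'nsze=%s flbas=%s\n' "${ns_regs[nsze]}" "${ns_regs[flbas]}"

With two variables, read assigns everything after the first colon to val, which is why values that themselves contain colons (such as the lbaf lines) survive intact, exactly as in the trace entries above and below.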
00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:14:03.877 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:14:03.878 10:19:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
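[annotation] The lbaf0-lbaf7 registers just captured describe the supported LBA formats, and flbas=0x4 recorded earlier marks lbaf4 ("ms:0 lbads:12 rp:0 (in use)") as the active one. A hedged worked example of decoding these, using only values present in this trace (variable names are illustrative, not from functions.sh):

#!/usr/bin/env bash
# Decode the in-use LBA format from the register values recorded above.
flbas=0x4                               # from the trace
lbaf4='ms:0 lbads:12 rp:0 (in use)'     # from the trace
nsze=0x140000                           # namespace size in blocks, from the trace
fmt=$(( flbas & 0xf ))                  # low 4 bits select the format -> 4
[[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
bs=$(( 1 << lbads ))                    # lbads is log2(block size): 2^12 = 4096
echo "in-use format: lbaf$fmt, block size: $bs bytes"
echo "capacity: $(( nsze * bs )) bytes" # 0x140000 * 4096 = 5368709120 (5 GiB)

The entries that follow repeat the same id-ns parse for the block-device node nvme0n1, which the functions.sh@54 glob matches alongside the generic ng0n1 node, so the identical register values appear a second time.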
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:03.878 10:19:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
    nvme0n1 id-ns registers captured at nvme/functions.sh@21-23:
        nsze=0x140000  ncap=0x140000  nuse=0x140000  nsfeat=0x14  nlbaf=7  flbas=0x4
        mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0  fpi=0  dlfeat=1
        nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
        npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127
        nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
        nguid=00000000000000000000000000000000  eui64=0000000000000000
        lbaf0='ms:0 lbads:9 rp:0'   lbaf1='ms:8 lbads:9 rp:0'   lbaf2='ms:16 lbads:9 rp:0'
        lbaf3='ms:64 lbads:9 rp:0'  lbaf4='ms:0 lbads:12 rp:0 (in use)'
        lbaf5='ms:8 lbads:12 rp:0'  lbaf6='ms:16 lbads:12 rp:0'  lbaf7='ms:64 lbads:12 rp:0'
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:14:03.880 10:19:10 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:14:03.880 10:19:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
'nvme1[sn]="12340 "' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:03.881 10:19:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:03.881 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:03.882 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.883 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:14:03.884 10:19:10 
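The namespace walk that follows (functions.sh@54-58 in the trace) relies on bash extglob alternation to pick up both the generic character nodes (ng1n1) and the block namespaces (nvme1n1) under the controller's sysfs directory. A self-contained sketch of that enumeration, assuming the sysfs layout shown in the log:

```bash
#!/usr/bin/env bash
# Sketch of the namespace enumeration at nvme/functions.sh@54-58.
# For ctrl=/sys/class/nvme/nvme1, "ng${ctrl##*nvme}" expands to "ng1" and
# "${ctrl##*/}n" to "nvme1n", so the pattern matches ng1n1, nvme1n1, ...
shopt -s extglob

ctrl=/sys/class/nvme/nvme1
declare -A _ctrl_ns=()

for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue          # unmatched patterns are returned literally
    ns_dev=${ns##*/}                  # ng1n1, nvme1n1, ...
    _ctrl_ns[${ns_dev##*n}]=$ns_dev   # key = namespace id after the last 'n'
done

declare -p _ctrl_ns                   # both nodes share key 1; the last match wins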
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:14:03.884 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1 (id-ns, cont.): nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:14:03.885 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:14:03.886 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:14:03.886 10:19:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:14:03.886 10:19:10 nvme_fdp -- nvme/functions.sh@55 -- # next namespace node: /sys/class/nvme/nvme1/nvme1n1 -> nvme_get nvme1n1 id-ns /dev/nvme1n1
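The trace here is the generic reader in nvme/functions.sh at work: nvme_get runs nvme-cli's id-ns (or id-ctrl) against a device node, splits each "field : value" line of the output on the colon, and folds the pair into a bash associative array named after the device. A minimal sketch of that pattern, assuming plain "field : value" text on stdin; the array name ns_info and the sample input are illustrative, and the real script evals into a caller-supplied array name rather than a hard-coded one:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop visible in this trace (functions.sh@16-@23).
# Assumption: input is nvme-cli's plain-text "field : value" identify output.
declare -A ns_info

parse_identify() {
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # strip the padding nvme-cli puts around keys
    val=${val# }               # drop the single space after the first colon;
                               # later colons (e.g. "ms:0 lbads:9") stay in val
    [[ -n $reg && -n $val ]] || continue
    ns_info[$reg]=$val         # real script: eval "${ref}[$reg]=\"$val\""
  done
}

parse_identify <<'EOF'
nsze  : 0x17a17a
flbas : 0x7
EOF
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Because the last read variable keeps the remainder of the line, multi-colon values such as the lbafN descriptors survive the split intact, which is exactly what the trace shows being stored.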
00:14:03.886 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 (id-ns): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:14:04.153 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:14:04.153 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:14:04.154 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:14:04.154 10:19:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:14:04.154 10:19:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
00:14:04.154 10:19:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:14:04.154 10:19:10 nvme_fdp -- nvme/functions.sh@47 -- # next controller: /sys/class/nvme/nvme2 (pci=0000:00:12.0, pci_can_use ok) -> nvme_get nvme2 id-ctrl /dev/nvme2
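Around each of these per-controller dumps sits the discovery loop (functions.sh@47-@63 in the trace): it walks /sys/class/nvme/nvme*, resolves the controller's PCI address, gates it through pci_can_use from scripts/common.sh, and records the controller, its namespace map, and its BDF before running id-ctrl. A rough self-contained sketch of that walk; the pci_can_use stub always accepts (the real check filters against an allow/block list), and the echo stands in for the nvme_get call:

#!/usr/bin/env bash
# Sketch of the controller discovery loop seen in the trace (functions.sh@47-@63).
declare -A ctrls nvmes bdfs

pci_can_use() { true; }   # stub; real version filters on an allow/block list

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue                        # glob may match nothing
  pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
  pci_can_use "$pci" || continue
  ctrl_dev=${ctrl##*/}                              # e.g. nvme2
  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the per-ctrl ns map
  bdfs[$ctrl_dev]=$pci
  echo "would run: nvme_get $ctrl_dev id-ctrl /dev/$ctrl_dev"
done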
00:14:04.154 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:14:04.155 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:14:04.155 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
00:14:04.156 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:14:04.156 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:14:04.157 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
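Most identify fields are zero on QEMU's emulated controller, but the capability masks carry real information; oncs=0x15d, for instance, is a bitmask of optional NVM commands the controller claims to support. A small worked decode, assuming the bit positions of the NVMe base spec's ONCS field (worth re-checking against the spec revision you target):

#!/usr/bin/env bash
# Worked example: decode nvme2[oncs]=0x15d into command names.
# Assumption: bit layout per the NVMe base spec ONCS field.
oncs=0x15d
names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
       "Save/Select in Features" "Reservations" "Timestamp" "Verify" "Copy")

for bit in "${!names[@]}"; do
  (( oncs >> bit & 1 )) && echo "ONCS bit $bit set: ${names[$bit]}"
done
# 0x15d -> bits 0,2,3,4,6,8: Compare, DSM, Write Zeroes, Save/Select, Timestamp, Copy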
00:14:04.157 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2: ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:14:04.157 10:19:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:14:04.157 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # next namespace node: /sys/class/nvme/nvme2/ng2n1 -> nvme_get ng2n1 id-ns /dev/ng2n1
00:14:04.157 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1 (id-ns): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0
00:14:04.158 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 --
# IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.159 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:14:04.160 10:19:11 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.160 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 
10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:14:04.161 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:14:04.162 
10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
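
Every namespace captured in this trace reports flbas=0x4 with lbaf4 marked "(in use)", i.e. the active format is LBA format 4: no metadata (ms:0) and 2^12 = 4096-byte data blocks (lbads:12). A short sketch of decoding that from the captured fields, with the array contents hard-coded from the log above so it runs standalone:

#!/usr/bin/env bash
# Decode the active LBA data size from fields captured above. The two
# array values are copied from the ng2n3 records; no device access needed.
declare -A ng2n3=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt=$((ng2n3[flbas] & 0xf))    # low nibble of FLBAS selects the format
desc=${ng2n3[lbaf$fmt]}        # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${desc#*lbads:}          # -> '12 rp:0 (in use)'
lbads=${lbads%% *}             # -> '12'
echo "lbaf$fmt: $((1 << lbads))-byte blocks"   # lbaf4: 4096-byte blocks
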
00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:14:04.162 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:14:04.163 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.163 10:19:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:04.163 10:19:11 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:04.163 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.164 
10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:04.164 10:19:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:04.164 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.165 
10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
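The trace above is the nvme_get helper in nvme/functions.sh consuming "reg : val" lines from nvme-cli's id-ns output and mirroring each field into a global associative array (nvme2n1 here) through the IFS=: / read / eval steps visible at functions.sh@21-23. A minimal standalone sketch of that pattern follows; the helper name is hypothetical, whitespace handling is simplified, and the CI job actually invokes a locally built nvme-cli (/usr/local/src/nvme-cli/nvme) rather than a system binary:

  #!/usr/bin/env bash
  # Sketch of the parsing loop traced above (functions.sh@16-23).
  # Hypothetical name; the real helper is nvme_get in SPDK's test scripts.
  nvme_get_sketch() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"              # declares the global array, e.g. nvme2n1
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue      # mirrors the [[ -n ... ]] guard at @22
      reg=${reg//[[:space:]]/}       # field names arrive right-padded
      val=${val# }                   # drop the one space after the colon
      eval "${ref}[${reg}]=\"${val}\""   # same eval pattern as @23
    done < <(nvme "$cmd" "$dev")
  }
  # Usage: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1; echo "${nvme2n1[nsze]}"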
00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:04.165 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:04.166 10:19:11 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.166 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:04.167 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.167 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:04.168 10:19:11 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.168 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:04.169 10:19:11 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:04.169 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:04.170 10:19:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:04.170 10:19:11 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:04.170 10:19:11 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:04.170 10:19:11 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:04.170 10:19:11 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.170 10:19:11 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:04.430 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
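Just above, the loop at functions.sh@47-63 finishes nvme2 (recording its three namespaces and filing it into the ctrls/nvmes/bdfs maps under its 0000:00:12.0 address) and moves on to /sys/class/nvme/nvme3, whose 0000:00:13.0 address passes the pci_can_use gate (the empty left-hand side of the =~ test in scripts/common.sh@21 suggests no allow list is set) before id-ctrl is parsed the same way. A rough standalone sketch of that walk, with the allow-list logic reduced to a single assumed env var (PCI_ALLOWED) in place of scripts/common.sh:

  # Sketch of the controller walk traced above (functions.sh@47-63).
  # pci_can_use in scripts/common.sh is richer; an env-driven allow list
  # (PCI_ALLOWED, assumed here) stands in for it.
  declare -A bdfs
  shopt -s nullglob
  for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                              # e.g. nvme3
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
    [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $pci "* ]] || continue
    bdfs[$ctrl_dev]=$pci
    echo "will identify $ctrl_dev at $pci"
  done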
00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 
10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.431 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.432 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:14:04.433 10:19:11 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:04.433 10:19:11 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:04.433 10:19:11 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:05.000 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:05.936 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.936 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.936 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.936 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.936 10:19:12 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:05.936 10:19:12 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:05.936 10:19:12 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.936 10:19:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:05.936 ************************************ 00:14:05.936 START TEST nvme_flexible_data_placement 00:14:05.936 ************************************ 00:14:05.936 10:19:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:06.195 Initializing NVMe Controllers 00:14:06.195 Attaching to 0000:00:13.0 00:14:06.195 Controller supports FDP Attached to 0000:00:13.0 00:14:06.195 Namespace ID: 1 Endurance Group ID: 1 00:14:06.195 Initialization complete. 
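The controller scan above reduces to one bit test: get_ctratt echoes the cached CTRATT value for each controller, and ctrl_has_fdp masks bit 19, the Flexible Data Placement capability flag, so only nvme3 (ctratt=0x88010) is selected while the 0x8000 controllers are skipped. A minimal standalone sketch of that check, using stand-in associative arrays rather than the real nvme* arrays built by nvme/functions.sh:

    declare -A nvme1=([ctratt]=0x8000)   # FDP bit clear
    declare -A nvme3=([ctratt]=0x88010)  # FDP bit set

    ctrl_has_fdp() {
        local -n _ctrl=$1
        # CTRATT bit 19 (0x80000) advertises FDP support
        (( _ctrl[ctratt] & 1 << 19 ))
    }

    for ctrl in nvme1 nvme3; do
        ctrl_has_fdp "$ctrl" && echo "$ctrl supports fdp"
    done
    # prints only: nvme3 supports fdp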
00:14:06.195 00:14:06.195 ================================== 00:14:06.195 == FDP tests for Namespace: #01 == 00:14:06.195 ================================== 00:14:06.195 00:14:06.195 Get Feature: FDP: 00:14:06.195 ================= 00:14:06.195 Enabled: Yes 00:14:06.195 FDP configuration Index: 0 00:14:06.195 00:14:06.195 FDP configurations log page 00:14:06.195 =========================== 00:14:06.195 Number of FDP configurations: 1 00:14:06.195 Version: 0 00:14:06.195 Size: 112 00:14:06.195 FDP Configuration Descriptor: 0 00:14:06.195 Descriptor Size: 96 00:14:06.195 Reclaim Group Identifier format: 2 00:14:06.195 FDP Volatile Write Cache: Not Present 00:14:06.195 FDP Configuration: Valid 00:14:06.195 Vendor Specific Size: 0 00:14:06.195 Number of Reclaim Groups: 2 00:14:06.195 Number of Reclaim Unit Handles: 8 00:14:06.195 Max Placement Identifiers: 128 00:14:06.195 Number of Namespaces Supported: 256 00:14:06.195 Reclaim unit Nominal Size: 6000000 bytes 00:14:06.195 Estimated Reclaim Unit Time Limit: Not Reported 00:14:06.195 RUH Desc #000: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #001: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #002: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #003: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #004: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #005: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #006: RUH Type: Initially Isolated 00:14:06.195 RUH Desc #007: RUH Type: Initially Isolated 00:14:06.195 00:14:06.195 FDP reclaim unit handle usage log page 00:14:06.195 ====================================== 00:14:06.195 Number of Reclaim Unit Handles: 8 00:14:06.195 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:06.195 RUH Usage Desc #001: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #002: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #003: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #004: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #005: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #006: RUH Attributes: Unused 00:14:06.195 RUH Usage Desc #007: RUH Attributes: Unused 00:14:06.195 00:14:06.195 FDP statistics log page 00:14:06.195 ======================= 00:14:06.195 Host bytes with metadata written: 931360768 00:14:06.195 Media bytes with metadata written: 931454976 00:14:06.195 Media bytes erased: 0 00:14:06.195 00:14:06.195 FDP Reclaim unit handle status 00:14:06.195 ============================== 00:14:06.195 Number of RUHS descriptors: 2 00:14:06.195 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000047c9 00:14:06.195 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:06.195 00:14:06.195 FDP write on placement id: 0 success 00:14:06.195 00:14:06.195 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:06.195 00:14:06.195 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:06.195 00:14:06.195 Get Feature: FDP Events for Placement handle: #0 00:14:06.195 ======================== 00:14:06.195 Number of FDP Events: 6 00:14:06.195 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:06.195 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:06.195 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:14:06.195 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:06.195 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:06.195 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:14:06.195 00:14:06.195 FDP events log page 
00:14:06.195 =================== 00:14:06.195 Number of FDP events: 1 00:14:06.195 FDP Event #0: 00:14:06.195 Event Type: RU Not Written to Capacity 00:14:06.195 Placement Identifier: Valid 00:14:06.195 NSID: Valid 00:14:06.195 Location: Valid 00:14:06.195 Placement Identifier: 0 00:14:06.195 Event Timestamp: 7 00:14:06.195 Namespace Identifier: 1 00:14:06.195 Reclaim Group Identifier: 0 00:14:06.195 Reclaim Unit Handle Identifier: 0 00:14:06.195 00:14:06.195 FDP test passed 00:14:06.195 00:14:06.195 real 0m0.296s 00:14:06.195 user 0m0.092s 00:14:06.195 sys 0m0.102s 00:14:06.195 10:19:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.195 ************************************ 00:14:06.195 END TEST nvme_flexible_data_placement 00:14:06.195 ************************************ 00:14:06.195 10:19:13 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:06.455 ************************************ 00:14:06.455 END TEST nvme_fdp 00:14:06.455 ************************************ 00:14:06.455 00:14:06.455 real 0m9.126s 00:14:06.455 user 0m1.631s 00:14:06.455 sys 0m2.518s 00:14:06.455 10:19:13 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.455 10:19:13 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:06.455 10:19:13 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:14:06.455 10:19:13 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:06.455 10:19:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.455 10:19:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.455 10:19:13 -- common/autotest_common.sh@10 -- # set +x 00:14:06.455 ************************************ 00:14:06.455 START TEST nvme_rpc 00:14:06.455 ************************************ 00:14:06.455 10:19:13 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:06.455 * Looking for test storage... 
00:14:06.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:06.455 10:19:13 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.455 10:19:13 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.455 10:19:13 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.715 10:19:13 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.715 --rc genhtml_branch_coverage=1 00:14:06.715 --rc genhtml_function_coverage=1 00:14:06.715 --rc genhtml_legend=1 00:14:06.715 --rc geninfo_all_blocks=1 00:14:06.715 --rc geninfo_unexecuted_blocks=1 00:14:06.715 00:14:06.715 ' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.715 --rc genhtml_branch_coverage=1 00:14:06.715 --rc genhtml_function_coverage=1 00:14:06.715 --rc genhtml_legend=1 00:14:06.715 --rc geninfo_all_blocks=1 00:14:06.715 --rc geninfo_unexecuted_blocks=1 00:14:06.715 00:14:06.715 ' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.715 --rc genhtml_branch_coverage=1 00:14:06.715 --rc genhtml_function_coverage=1 00:14:06.715 --rc genhtml_legend=1 00:14:06.715 --rc geninfo_all_blocks=1 00:14:06.715 --rc geninfo_unexecuted_blocks=1 00:14:06.715 00:14:06.715 ' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.715 --rc genhtml_branch_coverage=1 00:14:06.715 --rc genhtml_function_coverage=1 00:14:06.715 --rc genhtml_legend=1 00:14:06.715 --rc geninfo_all_blocks=1 00:14:06.715 --rc geninfo_unexecuted_blocks=1 00:14:06.715 00:14:06.715 ' 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67061 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:06.715 10:19:13 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67061 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67061 ']' 00:14:06.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.715 10:19:13 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.974 [2024-11-25 10:19:13.862129] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:14:06.974 [2024-11-25 10:19:13.862461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67061 ] 00:14:06.974 [2024-11-25 10:19:14.046058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:07.233 [2024-11-25 10:19:14.163480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.233 [2024-11-25 10:19:14.163549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.169 10:19:15 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.169 10:19:15 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:08.169 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:08.427 Nvme0n1 00:14:08.427 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:08.427 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:08.427 request: 00:14:08.427 { 00:14:08.427 "bdev_name": "Nvme0n1", 00:14:08.427 "filename": "non_existing_file", 00:14:08.427 "method": "bdev_nvme_apply_firmware", 00:14:08.427 "req_id": 1 00:14:08.427 } 00:14:08.427 Got JSON-RPC error response 00:14:08.427 response: 00:14:08.427 { 00:14:08.427 "code": -32603, 00:14:08.427 "message": "open file failed." 00:14:08.427 } 00:14:08.427 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:08.427 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:08.427 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:08.686 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:08.686 10:19:15 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67061 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67061 ']' 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67061 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67061 00:14:08.686 killing process with pid 67061 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67061' 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67061 00:14:08.686 10:19:15 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67061 00:14:11.238 ************************************ 00:14:11.238 END TEST nvme_rpc 00:14:11.238 ************************************ 00:14:11.238 00:14:11.238 real 0m4.667s 00:14:11.238 user 0m8.532s 00:14:11.238 sys 0m0.806s 00:14:11.238 10:19:18 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.238 10:19:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.238 10:19:18 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:11.238 10:19:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:14:11.238 10:19:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.238 10:19:18 -- common/autotest_common.sh@10 -- # set +x 00:14:11.238 ************************************ 00:14:11.238 START TEST nvme_rpc_timeouts 00:14:11.238 ************************************ 00:14:11.238 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:11.238 * Looking for test storage... 00:14:11.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:11.238 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:11.238 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:14:11.238 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:11.238 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:11.238 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.496 10:19:18 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.496 --rc genhtml_branch_coverage=1 00:14:11.496 --rc genhtml_function_coverage=1 00:14:11.496 --rc genhtml_legend=1 00:14:11.496 --rc geninfo_all_blocks=1 00:14:11.496 --rc geninfo_unexecuted_blocks=1 00:14:11.496 00:14:11.496 ' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.496 --rc genhtml_branch_coverage=1 00:14:11.496 --rc genhtml_function_coverage=1 00:14:11.496 --rc genhtml_legend=1 00:14:11.496 --rc geninfo_all_blocks=1 00:14:11.496 --rc geninfo_unexecuted_blocks=1 00:14:11.496 00:14:11.496 ' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.496 --rc genhtml_branch_coverage=1 00:14:11.496 --rc genhtml_function_coverage=1 00:14:11.496 --rc genhtml_legend=1 00:14:11.496 --rc geninfo_all_blocks=1 00:14:11.496 --rc geninfo_unexecuted_blocks=1 00:14:11.496 00:14:11.496 ' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:11.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.496 --rc genhtml_branch_coverage=1 00:14:11.496 --rc genhtml_function_coverage=1 00:14:11.496 --rc genhtml_legend=1 00:14:11.496 --rc geninfo_all_blocks=1 00:14:11.496 --rc geninfo_unexecuted_blocks=1 00:14:11.496 00:14:11.496 ' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67138 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67138 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67170 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:11.496 10:19:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67170 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67170 ']' 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.496 10:19:18 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:11.496 [2024-11-25 10:19:18.468597] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:14:11.496 [2024-11-25 10:19:18.468906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67170 ] 00:14:11.755 [2024-11-25 10:19:18.652626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:11.755 [2024-11-25 10:19:18.768987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.755 [2024-11-25 10:19:18.769027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.686 Checking default timeout settings: 00:14:12.686 10:19:19 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.686 10:19:19 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:14:12.687 10:19:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:12.687 10:19:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:12.944 Making settings changes with rpc: 00:14:12.945 10:19:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:12.945 10:19:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:13.203 Check default vs. modified settings: 00:14:13.203 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:13.203 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67138 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67138 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.463 Setting action_on_timeout is changed as expected. 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67138 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67138 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.463 Setting timeout_us is changed as expected. 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
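Every check in this pass follows the same extraction pattern: grep the setting out of the config dumps written by rpc.py save_config before and after bdev_nvme_set_options, take the second field with awk, strip punctuation with sed, and require the value to have moved. A condensed sketch of that pattern, reusing the pid-suffixed paths above (the check_setting helper name is illustrative, not part of the test script):

    check_setting() {
        local setting=$1 expected=$2 before after
        before=$(grep "$setting" /tmp/settings_default_67138 |
            awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67138 |
            awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # pass only if the value changed and matches what was requested
        [[ $before != "$after" && $after == "$expected" ]] || return 1
        echo "Setting $setting is changed as expected."
    }

    check_setting action_on_timeout abort
    check_setting timeout_us 12000000
    check_setting timeout_admin_us 24000000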
00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67138 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:13.463 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67138 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:13.722 Setting timeout_admin_us is changed as expected. 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67138 /tmp/settings_modified_67138 00:14:13.722 10:19:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67170 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67170 ']' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67170 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67170 00:14:13.722 killing process with pid 67170 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67170' 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67170 00:14:13.722 10:19:20 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67170 00:14:16.260 RPC TIMEOUT SETTING TEST PASSED. 00:14:16.260 10:19:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
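The killprocess helper traced at autotest_common.sh@954-978 above amounts to the following sketch; the sudo special case is tested but not taken in this run, and the already-gone branch is an assumption:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0   # assumed: nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
        fi
        # '[' reactor_0 = sudo ']' above guards a sudo-owned-process path not taken.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the RPC socket is free for the next test
    }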
00:14:16.260 ************************************ 00:14:16.260 END TEST nvme_rpc_timeouts 00:14:16.260 ************************************ 00:14:16.260 00:14:16.260 real 0m4.938s 00:14:16.260 user 0m9.302s 00:14:16.260 sys 0m0.814s 00:14:16.260 10:19:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.260 10:19:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:16.260 10:19:23 -- spdk/autotest.sh@239 -- # uname -s 00:14:16.260 10:19:23 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:16.260 10:19:23 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:16.260 10:19:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.260 10:19:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.260 10:19:23 -- common/autotest_common.sh@10 -- # set +x 00:14:16.260 ************************************ 00:14:16.260 START TEST sw_hotplug 00:14:16.260 ************************************ 00:14:16.260 10:19:23 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:16.260 * Looking for test storage... 00:14:16.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:16.260 10:19:23 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:16.260 10:19:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:14:16.260 10:19:23 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:16.260 10:19:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.260 10:19:23 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:16.519 10:19:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.520 10:19:23 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:16.520 10:19:23 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.520 10:19:23 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:16.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.520 --rc genhtml_branch_coverage=1 00:14:16.520 --rc genhtml_function_coverage=1 00:14:16.520 --rc genhtml_legend=1 00:14:16.520 --rc geninfo_all_blocks=1 00:14:16.520 --rc geninfo_unexecuted_blocks=1 00:14:16.520 00:14:16.520 ' 00:14:16.520 10:19:23 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:16.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.520 --rc genhtml_branch_coverage=1 00:14:16.520 --rc genhtml_function_coverage=1 00:14:16.520 --rc genhtml_legend=1 00:14:16.520 --rc geninfo_all_blocks=1 00:14:16.520 --rc geninfo_unexecuted_blocks=1 00:14:16.520 00:14:16.520 ' 00:14:16.520 10:19:23 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:16.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.520 --rc genhtml_branch_coverage=1 00:14:16.520 --rc genhtml_function_coverage=1 00:14:16.520 --rc genhtml_legend=1 00:14:16.520 --rc geninfo_all_blocks=1 00:14:16.520 --rc geninfo_unexecuted_blocks=1 00:14:16.520 00:14:16.520 ' 00:14:16.520 10:19:23 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:16.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.520 --rc genhtml_branch_coverage=1 00:14:16.520 --rc genhtml_function_coverage=1 00:14:16.520 --rc genhtml_legend=1 00:14:16.520 --rc geninfo_all_blocks=1 00:14:16.520 --rc geninfo_unexecuted_blocks=1 00:14:16.520 00:14:16.520 ' 00:14:16.520 10:19:23 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:17.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.087 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:17.087 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:17.087 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:17.087 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:17.087 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:17.087 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:17.087 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
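nvme_in_userspace, whose expansion is traced entry by entry below, reduces to: enumerate NVMe controllers by PCI class code, filter each BDF through pci_can_use (the PCI_ALLOWED list is empty at this point, so everything passes), and keep only devices not claimed by the kernel nvme driver. A sketch assembled from that trace:

    # NVMe = PCI class 01, subclass 08, progif 02 -> the "0108" and -p02 filters.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
                 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
                 | tr -d '"'))
    bdfs=()
    for bdf in "${nvmes[@]}"; do
        # Keep the device only if the kernel nvme driver has not claimed it
        # (they sit on uio_pci_generic here, so all four survive).
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
        bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 .. 0000:00:13.0 in this run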
00:14:17.087 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:17.087 10:19:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:17.346 10:19:24 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:17.346 10:19:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:17.346 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:17.346 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:17.347 10:19:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:17.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.864 Waiting for block devices as requested 00:14:18.123 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:18.123 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:18.382 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:18.382 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:23.690 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:23.690 10:19:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:23.690 10:19:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:23.948 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:24.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:24.207 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:24.466 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:25.033 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:25.033 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:25.033 10:19:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:25.033 10:19:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68066 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:25.033 10:19:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:25.033 10:19:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:25.033 10:19:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:25.033 10:19:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:25.033 10:19:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:25.033 10:19:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:25.291 Initializing NVMe Controllers 00:14:25.291 Attaching to 0000:00:10.0 00:14:25.291 Attaching to 0000:00:11.0 00:14:25.291 Attached to 0000:00:10.0 00:14:25.291 Attached to 0000:00:11.0 00:14:25.291 Initialization complete. Starting I/O... 
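debug_remove_attach_helper, launched above, wraps the whole hotplug exercise in timing_cmd (autotest_common.sh@709-722). The exec/fd gymnastics are elided; a sketch of the effect, with TIMEFORMAT taken from the trace and assuming the helper keeps stderr quiet:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R   # bash prints just the real time, 2 decimals
        # Run the helper, capturing what `time` writes to stderr (e.g. "43.12").
        time=$( { time "$@" >/dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return $cmd_es
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2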
00:14:25.291 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:25.291 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:25.291 00:14:26.229 QEMU NVMe Ctrl (12340 ): 1288 I/Os completed (+1288) 00:14:26.229 QEMU NVMe Ctrl (12341 ): 1292 I/Os completed (+1292) 00:14:26.229 00:14:27.603 QEMU NVMe Ctrl (12340 ): 3236 I/Os completed (+1948) 00:14:27.603 QEMU NVMe Ctrl (12341 ): 3240 I/Os completed (+1948) 00:14:27.604 00:14:28.537 QEMU NVMe Ctrl (12340 ): 5416 I/Os completed (+2180) 00:14:28.537 QEMU NVMe Ctrl (12341 ): 5420 I/Os completed (+2180) 00:14:28.537 00:14:29.475 QEMU NVMe Ctrl (12340 ): 7616 I/Os completed (+2200) 00:14:29.475 QEMU NVMe Ctrl (12341 ): 7620 I/Os completed (+2200) 00:14:29.475 00:14:30.423 QEMU NVMe Ctrl (12340 ): 9828 I/Os completed (+2212) 00:14:30.423 QEMU NVMe Ctrl (12341 ): 9834 I/Os completed (+2214) 00:14:30.423 00:14:30.991 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.991 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.991 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.991 [2024-11-25 10:19:38.092519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:30.991 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:30.991 [2024-11-25 10:19:38.094479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.094546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.094571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.094596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:30.991 [2024-11-25 10:19:38.097601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.097655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.097675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.991 [2024-11-25 10:19:38.097694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:31.251 [2024-11-25 10:19:38.125079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:31.251 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:31.251 [2024-11-25 10:19:38.127275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.127331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.127362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.127389] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:31.251 [2024-11-25 10:19:38.130415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.130464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.130487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 [2024-11-25 10:19:38.130527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:31.251 00:14:31.251 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:31.510 Attaching to 0000:00:10.0 00:14:31.510 Attached to 0000:00:10.0 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.510 10:19:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:31.510 Attaching to 0000:00:11.0 00:14:31.510 Attached to 0000:00:11.0 00:14:32.447 QEMU NVMe Ctrl (12340 ): 1828 I/Os completed (+1828) 00:14:32.447 QEMU NVMe Ctrl (12341 ): 1644 I/Os completed (+1644) 00:14:32.447 00:14:33.384 QEMU NVMe Ctrl (12340 ): 3876 I/Os completed (+2048) 00:14:33.384 QEMU NVMe Ctrl (12341 ): 3692 I/Os completed (+2048) 00:14:33.384 00:14:34.322 QEMU NVMe Ctrl (12340 ): 5908 I/Os completed (+2032) 00:14:34.322 QEMU NVMe Ctrl (12341 ): 5726 I/Os completed (+2034) 00:14:34.322 00:14:35.258 QEMU NVMe Ctrl (12340 ): 7960 I/Os completed (+2052) 00:14:35.258 QEMU NVMe Ctrl (12341 ): 7779 I/Os completed (+2053) 00:14:35.258 00:14:36.635 QEMU NVMe Ctrl (12340 ): 10016 I/Os completed (+2056) 00:14:36.635 QEMU NVMe Ctrl (12341 ): 9835 I/Os completed (+2056) 00:14:36.635 00:14:37.203 QEMU NVMe Ctrl (12340 ): 12096 I/Os completed (+2080) 00:14:37.203 QEMU NVMe Ctrl (12341 ): 11915 I/Os completed (+2080) 00:14:37.203 00:14:38.582 QEMU NVMe Ctrl (12340 ): 14228 I/Os completed (+2132) 00:14:38.582 QEMU NVMe Ctrl (12341 ): 14047 I/Os completed (+2132) 00:14:38.582 00:14:39.521 QEMU NVMe Ctrl (12340 ): 16360 I/Os completed (+2132) 00:14:39.521 QEMU NVMe Ctrl (12341 ): 16179 I/Os completed (+2132) 00:14:39.521 
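One full surprise-remove / re-attach cycle has now gone by: echo 1 per device, the abort storm, echo 1 again, the uio_pci_generic re-bind, then sleep 12 (2 x hotplug_wait) while I/O resumes. The xtrace hides redirection targets, so everything right of > below is an assumption based on the usual sysfs sequence; only echo 1 > /sys/bus/pci/rescan is confirmed verbatim by the trap handler later in this log:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target: hot-remove
    done
    echo 1 > /sys/bus/pci/rescan                      # confirmed target: re-discover
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # assumed
        # The BDF is echoed twice in the trace (@60/@61); one write is assumed
        # to be the probe request.
        echo "$dev" > /sys/bus/pci/drivers_probe                            # assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # assumed: clear
    done
    sleep 12                                          # 2 * hotplug_wait, per the trace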
00:14:40.458 QEMU NVMe Ctrl (12340 ): 18348 I/Os completed (+1988) 00:14:40.458 QEMU NVMe Ctrl (12341 ): 18218 I/Os completed (+2039) 00:14:40.458 00:14:41.395 QEMU NVMe Ctrl (12340 ): 20468 I/Os completed (+2120) 00:14:41.395 QEMU NVMe Ctrl (12341 ): 20338 I/Os completed (+2120) 00:14:41.395 00:14:42.332 QEMU NVMe Ctrl (12340 ): 22676 I/Os completed (+2208) 00:14:42.332 QEMU NVMe Ctrl (12341 ): 22546 I/Os completed (+2208) 00:14:42.332 00:14:43.269 QEMU NVMe Ctrl (12340 ): 24868 I/Os completed (+2192) 00:14:43.269 QEMU NVMe Ctrl (12341 ): 24738 I/Os completed (+2192) 00:14:43.269 00:14:43.530 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:43.530 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:43.530 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.530 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.530 [2024-11-25 10:19:50.475520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:43.530 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:43.530 [2024-11-25 10:19:50.477441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 [2024-11-25 10:19:50.477523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 [2024-11-25 10:19:50.477546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 [2024-11-25 10:19:50.477569] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:43.530 [2024-11-25 10:19:50.480510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 [2024-11-25 10:19:50.480562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.530 [2024-11-25 10:19:50.480580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.480599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:14:43.531 EAL: Scan for (pci) bus failed. 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.531 [2024-11-25 10:19:50.513351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:43.531 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:43.531 [2024-11-25 10:19:50.515440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.515520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.515551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.515570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:43.531 [2024-11-25 10:19:50.518195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.518246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.518278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 [2024-11-25 10:19:50.518311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:43.531 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:43.531 EAL: Scan for (pci) bus failed. 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:43.531 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:43.791 Attaching to 0000:00:10.0 00:14:43.791 Attached to 0000:00:10.0 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:43.791 10:19:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:43.791 Attaching to 0000:00:11.0 00:14:43.791 Attached to 0000:00:11.0 00:14:44.362 QEMU NVMe Ctrl (12340 ): 1188 I/Os completed (+1188) 00:14:44.362 QEMU NVMe Ctrl (12341 ): 968 I/Os completed (+968) 00:14:44.362 00:14:45.296 QEMU NVMe Ctrl (12340 ): 3356 I/Os completed (+2168) 00:14:45.296 QEMU NVMe Ctrl (12341 ): 3136 I/Os completed (+2168) 00:14:45.296 00:14:46.231 QEMU NVMe Ctrl (12340 ): 5540 I/Os completed (+2184) 00:14:46.231 QEMU NVMe Ctrl (12341 ): 5320 I/Os completed (+2184) 00:14:46.231 00:14:47.607 QEMU NVMe Ctrl (12340 ): 7688 I/Os completed (+2148) 00:14:47.607 QEMU NVMe Ctrl (12341 ): 7468 I/Os completed (+2148) 00:14:47.607 00:14:48.545 QEMU NVMe Ctrl (12340 ): 9872 I/Os completed (+2184) 00:14:48.545 QEMU NVMe Ctrl (12341 ): 9652 I/Os completed (+2184) 00:14:48.545 00:14:49.480 QEMU NVMe Ctrl (12340 ): 12044 I/Os completed (+2172) 00:14:49.480 QEMU NVMe Ctrl (12341 ): 11824 I/Os completed (+2172) 00:14:49.480 00:14:50.415 QEMU NVMe Ctrl (12340 ): 14248 I/Os completed (+2204) 00:14:50.415 QEMU NVMe Ctrl (12341 ): 14028 I/Os completed (+2204) 00:14:50.415 
00:14:51.355 QEMU NVMe Ctrl (12340 ): 16444 I/Os completed (+2196) 00:14:51.356 QEMU NVMe Ctrl (12341 ): 16224 I/Os completed (+2196) 00:14:51.356 00:14:52.292 QEMU NVMe Ctrl (12340 ): 18620 I/Os completed (+2176) 00:14:52.292 QEMU NVMe Ctrl (12341 ): 18402 I/Os completed (+2178) 00:14:52.292 00:14:53.225 QEMU NVMe Ctrl (12340 ): 20824 I/Os completed (+2204) 00:14:53.225 QEMU NVMe Ctrl (12341 ): 20606 I/Os completed (+2204) 00:14:53.225 00:14:54.596 QEMU NVMe Ctrl (12340 ): 23024 I/Os completed (+2200) 00:14:54.596 QEMU NVMe Ctrl (12341 ): 22806 I/Os completed (+2200) 00:14:54.596 00:14:55.531 QEMU NVMe Ctrl (12340 ): 25220 I/Os completed (+2196) 00:14:55.531 QEMU NVMe Ctrl (12341 ): 25002 I/Os completed (+2196) 00:14:55.531 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:55.793 [2024-11-25 10:20:02.846220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:55.793 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:55.793 [2024-11-25 10:20:02.848007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.848068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.848089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.848114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:55.793 [2024-11-25 10:20:02.851212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.851262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.851281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.851303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:55.793 [2024-11-25 10:20:02.883484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:55.793 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:55.793 [2024-11-25 10:20:02.885072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.885130] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.885153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.885174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:55.793 [2024-11-25 10:20:02.887780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.887827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.887850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 [2024-11-25 10:20:02.887867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:55.793 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:55.793 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:55.793 EAL: Scan for (pci) bus failed. 00:14:56.052 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:56.052 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:56.052 10:20:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:56.052 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:56.052 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:56.052 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:56.052 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:56.052 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:56.052 Attaching to 0000:00:10.0 00:14:56.052 Attached to 0000:00:10.0 00:14:56.311 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:56.311 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:56.311 10:20:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:56.311 Attaching to 0000:00:11.0 00:14:56.311 Attached to 0000:00:11.0 00:14:56.311 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:56.311 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:56.311 [2024-11-25 10:20:03.220431] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:08.516 10:20:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:08.516 10:20:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:08.516 10:20:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.12 00:15:08.516 10:20:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.12 00:15:08.516 10:20:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:08.516 10:20:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.12 00:15:08.516 10:20:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.12 2 00:15:08.516 remove_attach_helper took 43.12s to complete (handling 2 nvme drive(s)) 10:20:15 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68066 00:15:15.107 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68066) - No such process 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68066 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68609 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:15.107 10:20:21 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68609 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68609 ']' 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.107 10:20:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.107 [2024-11-25 10:20:21.332560] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
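tgt_run_hotplug switches to the bdev-backed variant: start spdk_tgt, install the cleanup trap quoted above, and block in waitforlisten until the RPC socket answers. The polling body below is a sketch; only the socket path and max_retries=100 are visible in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten: retry an RPC against /var/tmp/spdk.sock until the target is up.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done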
00:15:15.107 [2024-11-25 10:20:21.332684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68609 ] 00:15:15.107 [2024-11-25 10:20:21.517073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.107 [2024-11-25 10:20:21.649778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.367 10:20:22 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.367 10:20:22 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:15:15.367 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:15.367 10:20:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.367 10:20:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:15.625 10:20:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:15.625 10:20:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:22.194 10:20:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.194 10:20:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:22.194 [2024-11-25 10:20:28.565774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
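With use_bdev=true the helper no longer trusts sysfs alone: bdev_bdfs, traced at sw_hotplug.sh@12-13 above, asks the target which PCI addresses currently back NVMe bdevs. Written out (rpc_cmd is the test wrapper around rpc.py):

    bdev_bdfs() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))   # (0000:00:10.0 0000:00:11.0) while both controllers are attached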
00:15:22.194 [2024-11-25 10:20:28.568186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.568232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.568252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.194 [2024-11-25 10:20:28.568279] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.568291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.568305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.194 [2024-11-25 10:20:28.568318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.568332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.568343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.194 [2024-11-25 10:20:28.568362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.568373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.568387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.194 10:20:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:22.194 10:20:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:22.194 [2024-11-25 10:20:28.965154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
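The (( 2 > 0 )) / sleep 0.5 / "Still waiting for %s to be gone" pattern above is the detach-side poll: keep querying bdev_bdfs until the removed controllers drop out of the target's view. As a sketch:

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done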
00:15:22.194 [2024-11-25 10:20:28.967710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.967752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.967772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.194 [2024-11-25 10:20:28.967796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.194 [2024-11-25 10:20:28.967810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.194 [2024-11-25 10:20:28.967822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.195 [2024-11-25 10:20:28.967838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.195 [2024-11-25 10:20:28.967849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.195 [2024-11-25 10:20:28.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.195 [2024-11-25 10:20:28.967875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.195 [2024-11-25 10:20:28.967889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.195 [2024-11-25 10:20:28.967900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:22.195 10:20:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:22.195 10:20:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:22.195 10:20:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:22.195 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:22.453 10:20:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:34.656 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:34.656 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:34.656 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:34.657 [2024-11-25 10:20:41.644763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
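On the attach side, the `true` at sw_hotplug.sh@68 and the pattern match at @71 above verify that, after the sleep 12, exactly the expected controllers are visible again as bdevs. In plain bash form:

    bdfs=($(bdev_bdfs))
    # Both original BDFs must be back, in order, or the event fails.
    [[ "${bdfs[*]}" == "${nvmes[*]}" ]]   # "0000:00:10.0 0000:00:11.0" in this run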
00:15:34.657 [2024-11-25 10:20:41.647138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:34.657 [2024-11-25 10:20:41.647195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.657 [2024-11-25 10:20:41.647215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.657 [2024-11-25 10:20:41.647241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:34.657 [2024-11-25 10:20:41.647253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.657 [2024-11-25 10:20:41.647267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.657 [2024-11-25 10:20:41.647280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:34.657 [2024-11-25 10:20:41.647293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.657 [2024-11-25 10:20:41.647305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.657 [2024-11-25 10:20:41.647320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:34.657 [2024-11-25 10:20:41.647331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.657 [2024-11-25 10:20:41.647344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.657 10:20:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:34.657 10:20:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:35.225 [2024-11-25 10:20:42.044091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:35.225 [2024-11-25 10:20:42.046399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.225 [2024-11-25 10:20:42.046438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.225 [2024-11-25 10:20:42.046461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.225 [2024-11-25 10:20:42.046483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.225 [2024-11-25 10:20:42.046519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.225 [2024-11-25 10:20:42.046533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.225 [2024-11-25 10:20:42.046549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.225 [2024-11-25 10:20:42.046560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.225 [2024-11-25 10:20:42.046574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.225 [2024-11-25 10:20:42.046587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:35.225 [2024-11-25 10:20:42.046600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.225 [2024-11-25 10:20:42.046612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:35.225 10:20:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.225 10:20:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:35.225 10:20:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:35.225 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:35.484 10:20:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:47.688 [2024-11-25 10:20:54.723744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:47.688 [2024-11-25 10:20:54.726221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.688 [2024-11-25 10:20:54.726266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.688 [2024-11-25 10:20:54.726283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.688 [2024-11-25 10:20:54.726308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.688 [2024-11-25 10:20:54.726319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.688 [2024-11-25 10:20:54.726336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.688 [2024-11-25 10:20:54.726349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.688 [2024-11-25 10:20:54.726362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.688 [2024-11-25 10:20:54.726374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.688 [2024-11-25 10:20:54.726388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:47.688 [2024-11-25 10:20:54.726399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.688 [2024-11-25 10:20:54.726413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.688 10:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:47.688 10:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:48.253 [2024-11-25 10:20:55.123108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:48.253 [2024-11-25 10:20:55.125686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.253 [2024-11-25 10:20:55.125729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.253 [2024-11-25 10:20:55.125749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.253 [2024-11-25 10:20:55.125772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.253 [2024-11-25 10:20:55.125786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.253 [2024-11-25 10:20:55.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.253 [2024-11-25 10:20:55.125813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.253 [2024-11-25 10:20:55.125824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.253 [2024-11-25 10:20:55.125841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.253 [2024-11-25 10:20:55.125853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:48.253 [2024-11-25 10:20:55.125867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.253 [2024-11-25 10:20:55.125878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:48.253 10:20:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.253 10:20:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:48.253 10:20:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:48.253 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
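
The "Still waiting for ... to be gone" lines come from a polling loop at sw_hotplug.sh@50-51. Reconstructed from the trace alone (the script source is not shown here), it behaves roughly like this:

    # Poll until the hot-removed controllers drop out of the bdev list;
    # each pass re-runs bdev_bdfs and sleeps 0.5s, exactly as traced.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
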
00:15:48.510 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:48.766 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:48.766 10:20:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:16:00.962 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:00.962 10:21:07 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:00.962 10:21:07 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:00.962 10:21:07 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:07.519 10:21:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 10:21:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 [2024-11-25 10:21:13.803355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:07.519 [2024-11-25 10:21:13.806251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:13.806415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:13.806549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:13.806674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:13.806822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:13.806928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:13.807040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:13.807082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:13.807181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:13.807291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:13.807331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:13.807444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 10:21:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:07.519 10:21:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:07.519 [2024-11-25 10:21:14.302569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
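
The helper_time=45.20 bookkeeping earlier in the log is produced by a bash timing idiom in autotest_common.sh (@709-722 in the trace). A simplified sketch, assuming the real wrapper additionally manages stdin and xtrace state:

    # TIMEFORMAT=%2R makes the `time` builtin print only elapsed wall-clock
    # seconds; capturing stderr of the brace group yields that one number.
    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2
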
00:16:07.519 [2024-11-25 10:21:14.305164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:14.305215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:14.305236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:14.305260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:14.305276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:14.305289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:14.305306] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:14.305318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:14.305333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 [2024-11-25 10:21:14.305347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:07.519 [2024-11-25 10:21:14.305362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.519 [2024-11-25 10:21:14.305374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:07.519 10:21:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.519 10:21:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:07.519 10:21:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.519 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:07.777 10:21:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:19.979 [2024-11-25 10:21:26.882325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
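
Bash xtrace never prints redirections, so the echo traces at sw_hotplug.sh@40, @56 and @58-62 show only the written values, not their targets. As a loudly hypothetical sketch, the traced sequence maps onto the standard Linux PCI hotplug nodes roughly as follows; the exact sysfs paths are assumptions, not read from the trace:

    # Assumed targets: standard PCI hotplug/rebind sysfs nodes.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"     # @40: soft hot-remove
    done
    echo 1 > /sys/bus/pci/rescan                        # @56: rediscover devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe        # @60/@61: rebind (assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62
    done
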
00:16:19.979 [2024-11-25 10:21:26.884170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.979 [2024-11-25 10:21:26.884226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.979 [2024-11-25 10:21:26.884245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.979 [2024-11-25 10:21:26.884272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.979 [2024-11-25 10:21:26.884285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.979 [2024-11-25 10:21:26.884301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.979 [2024-11-25 10:21:26.884315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.979 [2024-11-25 10:21:26.884332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.979 [2024-11-25 10:21:26.884344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.979 [2024-11-25 10:21:26.884360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:19.979 [2024-11-25 10:21:26.884372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.979 [2024-11-25 10:21:26.884387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.979 10:21:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:19.979 10:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:20.237 [2024-11-25 10:21:27.281676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:20.237 [2024-11-25 10:21:27.284208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.237 [2024-11-25 10:21:27.284249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.237 [2024-11-25 10:21:27.284269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.237 [2024-11-25 10:21:27.284292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.237 [2024-11-25 10:21:27.284310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.237 [2024-11-25 10:21:27.284322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.237 [2024-11-25 10:21:27.284338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.237 [2024-11-25 10:21:27.284351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.237 [2024-11-25 10:21:27.284366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.237 [2024-11-25 10:21:27.284380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:20.237 [2024-11-25 10:21:27.284394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:20.237 [2024-11-25 10:21:27.284406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:20.495 10:21:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.495 10:21:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:20.495 10:21:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.495 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:20.753 10:21:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:32.956 10:21:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.956 10:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:32.956 10:21:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:32.956 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:32.956 10:21:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.956 10:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:32.956 [2024-11-25 10:21:39.961318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
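
The backslash-riddled comparison at sw_hotplug.sh@71 is simply how set -x renders a quoted right-hand side inside [[ == ]]: every character is escaped so the string is matched literally rather than as a glob. The check itself reduces to:

    # After the rescan, the deduplicated BDF list must equal the original
    # pair again, otherwise the hotplug iteration fails (sw_hotplug.sh@70-71).
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]
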
00:16:32.956 [2024-11-25 10:21:39.963342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:32.956 [2024-11-25 10:21:39.963504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.956 [2024-11-25 10:21:39.963622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.956 [2024-11-25 10:21:39.963697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:32.956 [2024-11-25 10:21:39.963781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.956 [2024-11-25 10:21:39.963848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.956 [2024-11-25 10:21:39.963943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:32.956 [2024-11-25 10:21:39.964051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.956 [2024-11-25 10:21:39.964197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.957 [2024-11-25 10:21:39.964301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:32.957 [2024-11-25 10:21:39.964340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.957 [2024-11-25 10:21:39.964567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.957 10:21:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.957 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:32.957 10:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:33.524 [2024-11-25 10:21:40.360685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:33.524 [2024-11-25 10:21:40.363377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:33.524 [2024-11-25 10:21:40.363580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.524 [2024-11-25 10:21:40.363755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.524 [2024-11-25 10:21:40.363880] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:33.524 [2024-11-25 10:21:40.363926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.524 [2024-11-25 10:21:40.364039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.524 [2024-11-25 10:21:40.364100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:33.524 [2024-11-25 10:21:40.364135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.524 [2024-11-25 10:21:40.364254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.524 [2024-11-25 10:21:40.364310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:33.524 [2024-11-25 10:21:40.364350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.524 [2024-11-25 10:21:40.364454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:33.524 10:21:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.524 10:21:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:33.524 10:21:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:33.524 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:33.783 10:21:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:16:46.037 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:46.037 10:21:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68609 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68609 ']' 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68609 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68609 00:16:46.037 killing process with pid 68609 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68609' 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68609 00:16:46.037 10:21:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68609 00:16:48.571 10:21:55 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:48.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:49.398 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:49.399 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:49.657 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:49.657 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:49.657 00:16:49.657 real 2m33.535s 00:16:49.657 user 1m51.196s 00:16:49.657 sys 0m22.576s 00:16:49.657 
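
killprocess, traced here at autotest_common.sh@954-978, is the standard teardown for the spdk_tgt reactor; below is a sketch of the traced control flow (simplified, and assuming the pid is a child of the calling shell so wait can reap it):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                      # @954: require a pid
        kill -0 "$pid" 2> /dev/null || return 0        # @958: already gone?
        if [[ $(uname) == Linux ]]; then
            # @959-964: refuse to kill a sudo wrapper by accident
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"                                    # default SIGTERM
        wait "$pid"                                    # @978: reap the child
    }
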
************************************ 00:16:49.657 END TEST sw_hotplug 00:16:49.657 ************************************ 00:16:49.657 10:21:56 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.657 10:21:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:49.657 10:21:56 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:49.657 10:21:56 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:49.657 10:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:49.657 10:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.657 10:21:56 -- common/autotest_common.sh@10 -- # set +x 00:16:49.657 ************************************ 00:16:49.657 START TEST nvme_xnvme 00:16:49.657 ************************************ 00:16:49.657 10:21:56 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:49.915 * Looking for test storage... 00:16:49.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:49.915 10:21:56 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:49.915 10:21:56 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:49.915 10:21:56 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:49.915 10:21:56 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:49.915 10:21:56 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.915 10:21:56 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.915 10:21:56 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.915 10:21:56 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:49.916 10:21:56 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.916 10:21:57 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:49.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.916 --rc genhtml_branch_coverage=1 00:16:49.916 --rc genhtml_function_coverage=1 00:16:49.916 --rc genhtml_legend=1 00:16:49.916 --rc geninfo_all_blocks=1 00:16:49.916 --rc geninfo_unexecuted_blocks=1 00:16:49.916 00:16:49.916 ' 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:49.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.916 --rc genhtml_branch_coverage=1 00:16:49.916 --rc genhtml_function_coverage=1 00:16:49.916 --rc genhtml_legend=1 00:16:49.916 --rc geninfo_all_blocks=1 00:16:49.916 --rc geninfo_unexecuted_blocks=1 00:16:49.916 00:16:49.916 ' 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:49.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.916 --rc genhtml_branch_coverage=1 00:16:49.916 --rc genhtml_function_coverage=1 00:16:49.916 --rc genhtml_legend=1 00:16:49.916 --rc geninfo_all_blocks=1 00:16:49.916 --rc geninfo_unexecuted_blocks=1 00:16:49.916 00:16:49.916 ' 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:49.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.916 --rc genhtml_branch_coverage=1 00:16:49.916 --rc genhtml_function_coverage=1 00:16:49.916 --rc genhtml_legend=1 00:16:49.916 --rc geninfo_all_blocks=1 00:16:49.916 --rc geninfo_unexecuted_blocks=1 00:16:49.916 00:16:49.916 ' 00:16:49.916 10:21:57 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:49.916 10:21:57 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:49.916 10:21:57 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:49.916 10:21:57 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:49.916 10:21:57 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:50.177 10:21:57 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:50.177 10:21:57 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:50.177 10:21:57 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:50.177 10:21:57 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:50.177 #define SPDK_CONFIG_H 00:16:50.177 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:50.177 #define SPDK_CONFIG_APPS 1 00:16:50.177 #define SPDK_CONFIG_ARCH native 00:16:50.177 #define SPDK_CONFIG_ASAN 1 00:16:50.177 #undef SPDK_CONFIG_AVAHI 00:16:50.177 #undef SPDK_CONFIG_CET 00:16:50.177 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:50.177 #define SPDK_CONFIG_COVERAGE 1 00:16:50.177 #define SPDK_CONFIG_CROSS_PREFIX 00:16:50.177 #undef SPDK_CONFIG_CRYPTO 00:16:50.177 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:50.177 #undef SPDK_CONFIG_CUSTOMOCF 00:16:50.177 #undef SPDK_CONFIG_DAOS 00:16:50.177 #define SPDK_CONFIG_DAOS_DIR 00:16:50.177 #define SPDK_CONFIG_DEBUG 1 00:16:50.177 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:50.177 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:50.177 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:50.177 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:50.177 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:50.177 #undef SPDK_CONFIG_DPDK_UADK 00:16:50.178 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:50.178 #define SPDK_CONFIG_EXAMPLES 1 00:16:50.178 #undef SPDK_CONFIG_FC 00:16:50.178 #define SPDK_CONFIG_FC_PATH 00:16:50.178 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:50.178 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:50.178 #define SPDK_CONFIG_FSDEV 1 00:16:50.178 #undef SPDK_CONFIG_FUSE 00:16:50.178 #undef SPDK_CONFIG_FUZZER 00:16:50.178 #define SPDK_CONFIG_FUZZER_LIB 00:16:50.178 #undef SPDK_CONFIG_GOLANG 00:16:50.178 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:50.178 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:50.178 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:50.178 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:50.178 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:50.178 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:50.178 #undef SPDK_CONFIG_HAVE_LZ4 00:16:50.178 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:50.178 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:50.178 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:50.178 #define SPDK_CONFIG_IDXD 1 00:16:50.178 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:50.178 #undef SPDK_CONFIG_IPSEC_MB 00:16:50.178 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:50.178 #define SPDK_CONFIG_ISAL 1 00:16:50.178 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:50.178 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:50.178 #define SPDK_CONFIG_LIBDIR 00:16:50.178 #undef SPDK_CONFIG_LTO 00:16:50.178 #define SPDK_CONFIG_MAX_LCORES 128 00:16:50.178 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:50.178 #define SPDK_CONFIG_NVME_CUSE 1 00:16:50.178 #undef SPDK_CONFIG_OCF 00:16:50.178 #define SPDK_CONFIG_OCF_PATH 00:16:50.178 #define SPDK_CONFIG_OPENSSL_PATH 00:16:50.178 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:50.178 #define SPDK_CONFIG_PGO_DIR 00:16:50.178 #undef SPDK_CONFIG_PGO_USE 00:16:50.178 #define SPDK_CONFIG_PREFIX /usr/local 00:16:50.178 #undef SPDK_CONFIG_RAID5F 00:16:50.178 #undef SPDK_CONFIG_RBD 00:16:50.178 #define SPDK_CONFIG_RDMA 1 00:16:50.178 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:50.178 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:50.178 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:50.178 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:50.178 #define SPDK_CONFIG_SHARED 1 00:16:50.178 #undef SPDK_CONFIG_SMA 00:16:50.178 #define SPDK_CONFIG_TESTS 1 00:16:50.178 #undef SPDK_CONFIG_TSAN 00:16:50.178 #define SPDK_CONFIG_UBLK 1 00:16:50.178 #define SPDK_CONFIG_UBSAN 1 00:16:50.178 #undef SPDK_CONFIG_UNIT_TESTS 00:16:50.178 #undef SPDK_CONFIG_URING 00:16:50.178 #define SPDK_CONFIG_URING_PATH 00:16:50.178 #undef SPDK_CONFIG_URING_ZNS 00:16:50.178 #undef SPDK_CONFIG_USDT 00:16:50.178 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:50.178 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:50.178 #undef SPDK_CONFIG_VFIO_USER 00:16:50.178 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:50.178 #define SPDK_CONFIG_VHOST 1 00:16:50.178 #define SPDK_CONFIG_VIRTIO 1 00:16:50.178 #undef SPDK_CONFIG_VTUNE 00:16:50.178 #define SPDK_CONFIG_VTUNE_DIR 00:16:50.178 #define SPDK_CONFIG_WERROR 1 00:16:50.178 #define SPDK_CONFIG_WPDK_DIR 00:16:50.178 #define SPDK_CONFIG_XNVME 1 00:16:50.178 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:50.178 10:21:57 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:50.178 10:21:57 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.178 10:21:57 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.178 10:21:57 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.178 10:21:57 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.178 10:21:57 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.178 10:21:57 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.178 10:21:57 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.178 10:21:57 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.178 10:21:57 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:50.178 10:21:57 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:50.178 
10:21:57 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:50.178 10:21:57 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- 
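# Sketch of the resource-monitor bookkeeping traced above: an associative
# array marks which collectors need root, and its 0/1 value indexes the
# two-entry SUDO array. Collector names are from the trace; the dispatch
# loop is illustrative.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1            # BMC power readings need root
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=('' 'sudo -E')
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # subset used on this VM
for mon in "${MONITOR_RESOURCES[@]}"; do
    echo "launch: ${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
done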
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:50.178 10:21:57 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:50.179 10:21:57 nvme_xnvme -- 
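# Sketch: each ': <value>' / 'export <flag>' pair above is the xtrace of the
# defaulting idiom below. ':' is a no-op whose argument forces the :=
# expansion, so the value printed is either the default or whatever the CI
# job already exported (here, e.g., SPDK_TEST_XNVME=1 and SPDK_RUN_ASAN=1).
: "${SPDK_TEST_NVME:=0}"
export SPDK_TEST_NVME
: "${SPDK_TEST_XNVME:=0}"
export SPDK_TEST_XNVME
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
export SPDK_TEST_NVMF_TRANSPORT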
common/autotest_common.sh@173 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:50.179 10:21:57 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:50.179 10:21:57 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
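# Sketch of the sanitizer wiring just traced: the ASAN/UBSAN option strings
# are copied verbatim from the trace, and the LSAN suppression file is
# rebuilt with the libfuse3 leak entry before being exported. The rm/echo
# pair simplifies the rm -rf plus cat steps seen above.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -f "$asan_suppression_file"
echo 'leak:libfuse3.so' >> "$asan_suppression_file"
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'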
00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69955 ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69955 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.eRjYIm 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.eRjYIm/tests/xnvme /tmp/spdk.eRjYIm 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:50.180 10:21:57 nvme_xnvme -- 
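# Sketch of set_test_storage's candidate setup, reconstructed from the
# trace: mktemp -udt prints an unused template-based name under /tmp without
# creating it (-u), and all three candidate dirs are created up front.
testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme   # from the trace
storage_fallback=$(mktemp -udt spdk.XXXXXX)            # e.g. /tmp/spdk.eRjYIm
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
mkdir -p "${storage_candidates[@]}"
requested_size=2214592512   # 2 GiB request plus 64 MiB of slack, per the trace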
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13983649792 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5584007168 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13983649792 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5584007168 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91989020672 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7713759232 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:50.180 * Looking for test storage... 
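# Sketch of the search announced above: df names the mount point backing the
# candidate dir, and its available space is checked against the request.
# -B1/--output are GNU df conveniences standing in for the read loop above.
target_dir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')   # /home here
target_space=$(df -B1 --output=avail "$target_dir" | tail -n1)   # bytes
requested_size=2214592512
if (( target_space >= requested_size )); then
    printf '* Found test storage at %s (on %s)\n' "$target_dir" "$mount"
fi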
00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13983649792 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:50.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:50.180 10:21:57 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.181 10:21:57 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.440 --rc genhtml_branch_coverage=1 00:16:50.440 --rc genhtml_function_coverage=1 00:16:50.440 --rc genhtml_legend=1 00:16:50.440 --rc geninfo_all_blocks=1 00:16:50.440 --rc geninfo_unexecuted_blocks=1 00:16:50.440 00:16:50.440 ' 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.440 --rc genhtml_branch_coverage=1 00:16:50.440 --rc genhtml_function_coverage=1 00:16:50.440 --rc genhtml_legend=1 00:16:50.440 --rc geninfo_all_blocks=1 
00:16:50.440 --rc geninfo_unexecuted_blocks=1 00:16:50.440 00:16:50.440 ' 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.440 --rc genhtml_branch_coverage=1 00:16:50.440 --rc genhtml_function_coverage=1 00:16:50.440 --rc genhtml_legend=1 00:16:50.440 --rc geninfo_all_blocks=1 00:16:50.440 --rc geninfo_unexecuted_blocks=1 00:16:50.440 00:16:50.440 ' 00:16:50.440 10:21:57 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.440 --rc genhtml_branch_coverage=1 00:16:50.440 --rc genhtml_function_coverage=1 00:16:50.440 --rc genhtml_legend=1 00:16:50.440 --rc geninfo_all_blocks=1 00:16:50.440 --rc geninfo_unexecuted_blocks=1 00:16:50.440 00:16:50.440 ' 00:16:50.440 10:21:57 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.440 10:21:57 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.440 10:21:57 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.440 10:21:57 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.440 10:21:57 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.440 10:21:57 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:50.440 10:21:57 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.440 10:21:57 nvme_xnvme -- 
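# Sketch of the cmp_versions walk traced a little above, where 'lt 1.15 2'
# decides that lcov is older than 2.0 and the pre-2.0 coverage flags apply.
# Simplified reconstruction; assumes purely numeric version fields.
version_lt() {                    # returns 0 if $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i
    for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                      # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov < 2: use --rc lcov_branch_coverage=1 flags'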
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:50.440 10:21:57 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:50.441 10:21:57 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:50.441 10:21:57 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:50.441 10:21:57 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:50.441 10:21:57 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:51.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.262 Waiting for block devices as requested 00:16:51.262 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:51.262 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:51.519 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:51.519 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.783 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:56.783 10:22:03 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:57.041 10:22:04 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:57.041 10:22:04 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:57.301 10:22:04 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:57.301 No valid GPT data, bailing 00:16:57.301 10:22:04 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- 
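# Sketch of the xnvme test matrix assembled above, after setup.sh reset has
# rebound the drives and the GPT probe confirmed /dev/nvme0n1 is unused:
# per-mechanism device nodes plus the parameter set later handed to
# bdev_xnvme_create. The loop body is illustrative.
xnvme_io=(libaio io_uring io_uring_cmd)
declare -A xnvme_filename=(
    [libaio]=/dev/nvme0n1
    [io_uring]=/dev/nvme0n1
    [io_uring_cmd]=/dev/ng0n1     # char node used for uring passthru commands
)
declare -A method_bdev_xnvme_create_0=(
    [name]=xnvme_bdev
    [filename]=/dev/nvme0n1
    [io_mechanism]=libaio
    [conserve_cpu]=false
)
xnvme_conserve_cpu=(false true)
for io in "${xnvme_io[@]}"; do
    echo "matrix entry: io_mechanism=$io filename=${xnvme_filename[$io]}"
done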
scripts/common.sh@394 -- # pt= 00:16:57.301 10:22:04 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:57.301 10:22:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:57.301 10:22:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.301 10:22:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.301 10:22:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.301 ************************************ 00:16:57.301 START TEST xnvme_rpc 00:16:57.301 ************************************ 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70354 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70354 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70354 ']' 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.301 10:22:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.561 [2024-11-25 10:22:04.505585] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
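# Sketch of the RPC round-trip this test performs against the freshly
# started spdk_tgt: create the xnvme bdev, read the config back and pick
# fields out with jq (as the rpc_xnvme checks below do), then delete it.
# The trace drives this through the rpc_cmd wrapper; plain rpc.py shown here.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
"$rpc" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
"$rpc" bdev_xnvme_delete xnvme_bdev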
00:16:57.561 [2024-11-25 10:22:04.505899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70354 ] 00:16:57.820 [2024-11-25 10:22:04.685118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.820 [2024-11-25 10:22:04.796308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.756 xnvme_bdev 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:58.756 10:22:05 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.756 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70354 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70354 ']' 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70354 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70354 00:16:59.015 killing process with pid 70354 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70354' 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70354 00:16:59.015 10:22:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70354 00:17:01.558 00:17:01.558 real 0m3.901s 00:17:01.558 user 0m3.955s 00:17:01.558 sys 0m0.545s 00:17:01.558 10:22:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.558 ************************************ 00:17:01.558 END TEST xnvme_rpc 00:17:01.558 ************************************ 00:17:01.558 10:22:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:01.558 10:22:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:01.558 10:22:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:01.558 10:22:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.558 10:22:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:01.558 ************************************ 00:17:01.558 START TEST xnvme_bdevperf 00:17:01.558 ************************************ 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:01.558 10:22:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:01.558 { 00:17:01.558 "subsystems": [ 00:17:01.558 { 00:17:01.558 "subsystem": "bdev", 00:17:01.558 "config": [ 00:17:01.558 { 00:17:01.558 "params": { 00:17:01.558 "io_mechanism": "libaio", 00:17:01.558 "conserve_cpu": false, 00:17:01.558 "filename": "/dev/nvme0n1", 00:17:01.558 "name": "xnvme_bdev" 00:17:01.558 }, 00:17:01.558 "method": "bdev_xnvme_create" 00:17:01.558 }, 00:17:01.558 { 00:17:01.558 "method": "bdev_wait_for_examine" 00:17:01.558 } 00:17:01.558 ] 00:17:01.558 } 00:17:01.558 ] 00:17:01.558 } 00:17:01.558 [2024-11-25 10:22:08.459522] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:17:01.558 [2024-11-25 10:22:08.459647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70434 ] 00:17:01.558 [2024-11-25 10:22:08.639604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.817 [2024-11-25 10:22:08.749430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.075 Running I/O for 5 seconds... 00:17:04.388 44310.00 IOPS, 173.09 MiB/s [2024-11-25T10:22:12.437Z] 44414.00 IOPS, 173.49 MiB/s [2024-11-25T10:22:13.373Z] 44125.33 IOPS, 172.36 MiB/s [2024-11-25T10:22:14.310Z] 44265.50 IOPS, 172.91 MiB/s [2024-11-25T10:22:14.310Z] 43929.40 IOPS, 171.60 MiB/s 00:17:07.198 Latency(us) 00:17:07.198 [2024-11-25T10:22:14.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.198 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:07.198 xnvme_bdev : 5.01 43888.89 171.44 0.00 0.00 1454.78 457.30 7001.03 00:17:07.198 [2024-11-25T10:22:14.310Z] =================================================================================================================== 00:17:07.198 [2024-11-25T10:22:14.310Z] Total : 43888.89 171.44 0.00 0.00 1454.78 457.30 7001.03 00:17:08.576 10:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:08.576 10:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:08.576 10:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:08.576 10:22:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:08.576 10:22:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:08.576 { 00:17:08.576 "subsystems": [ 00:17:08.576 { 00:17:08.576 "subsystem": "bdev", 00:17:08.576 "config": [ 00:17:08.576 { 00:17:08.576 "params": { 00:17:08.576 "io_mechanism": "libaio", 00:17:08.576 "conserve_cpu": false, 00:17:08.576 "filename": "/dev/nvme0n1", 00:17:08.576 "name": "xnvme_bdev" 00:17:08.576 }, 00:17:08.576 "method": "bdev_xnvme_create" 00:17:08.576 }, 00:17:08.576 { 00:17:08.576 "method": "bdev_wait_for_examine" 00:17:08.576 } 00:17:08.576 ] 00:17:08.576 } 00:17:08.576 ] 00:17:08.576 } 00:17:08.576 [2024-11-25 10:22:15.378545] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
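# Sketch of how the bdevperf runs above and below are configured: gen_conf
# emits the JSON subsystem block shown, and --json /dev/fd/62 consumes it
# from a process substitution, so no config file touches disk.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" --json <(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096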
00:17:08.576 [2024-11-25 10:22:15.378669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70509 ] 00:17:08.576 [2024-11-25 10:22:15.558846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.834 [2024-11-25 10:22:15.696815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.093 Running I/O for 5 seconds... 00:17:11.409 36325.00 IOPS, 141.89 MiB/s [2024-11-25T10:22:19.456Z] 34335.50 IOPS, 134.12 MiB/s [2024-11-25T10:22:20.392Z] 33974.67 IOPS, 132.71 MiB/s [2024-11-25T10:22:21.328Z] 34634.50 IOPS, 135.29 MiB/s 00:17:14.216 Latency(us) 00:17:14.216 [2024-11-25T10:22:21.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.216 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:14.216 xnvme_bdev : 5.00 35512.42 138.72 0.00 0.00 1797.93 88.83 26740.79 00:17:14.216 [2024-11-25T10:22:21.328Z] =================================================================================================================== 00:17:14.216 [2024-11-25T10:22:21.328Z] Total : 35512.42 138.72 0.00 0.00 1797.93 88.83 26740.79 00:17:15.151 00:17:15.151 real 0m13.872s 00:17:15.151 user 0m5.591s 00:17:15.151 sys 0m5.758s 00:17:15.151 10:22:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.151 10:22:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:15.151 ************************************ 00:17:15.151 END TEST xnvme_bdevperf 00:17:15.151 ************************************ 00:17:15.410 10:22:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:15.410 10:22:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.410 10:22:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.410 10:22:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.410 ************************************ 00:17:15.410 START TEST xnvme_fio_plugin 00:17:15.410 ************************************ 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:15.410 { 00:17:15.410 "subsystems": [ 00:17:15.410 { 00:17:15.410 "subsystem": "bdev", 00:17:15.410 "config": [ 00:17:15.410 { 00:17:15.410 "params": { 00:17:15.410 "io_mechanism": "libaio", 00:17:15.410 "conserve_cpu": false, 00:17:15.410 "filename": "/dev/nvme0n1", 00:17:15.410 "name": "xnvme_bdev" 00:17:15.410 }, 00:17:15.410 "method": "bdev_xnvme_create" 00:17:15.410 }, 00:17:15.410 { 00:17:15.410 "method": "bdev_wait_for_examine" 00:17:15.410 } 00:17:15.410 ] 00:17:15.410 } 00:17:15.410 ] 00:17:15.410 } 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:15.410 10:22:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:15.670 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:15.670 fio-3.35 00:17:15.670 Starting 1 thread 00:17:22.235 00:17:22.235 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70634: Mon Nov 25 10:22:28 2024 00:17:22.235 read: IOPS=37.3k, BW=146MiB/s (153MB/s)(729MiB/5001msec) 00:17:22.235 slat (usec): min=4, max=1135, avg=22.98, stdev=35.47 00:17:22.235 clat (usec): min=85, max=7104, avg=1024.88, stdev=718.51 00:17:22.235 lat (usec): min=100, max=7187, avg=1047.86, stdev=725.72 00:17:22.235 clat percentiles (usec): 00:17:22.235 | 1.00th=[ 198], 5.00th=[ 289], 10.00th=[ 375], 20.00th=[ 510], 00:17:22.235 | 30.00th=[ 635], 40.00th=[ 758], 50.00th=[ 873], 60.00th=[ 988], 00:17:22.235 | 70.00th=[ 1139], 80.00th=[ 1336], 90.00th=[ 1762], 95.00th=[ 2442], 00:17:22.235 | 99.00th=[ 4047], 99.50th=[ 4490], 99.90th=[ 5211], 99.95th=[ 5538], 00:17:22.235 | 99.99th=[ 6325] 00:17:22.235 bw ( KiB/s): min=112432, max=196368, per=100.00%, avg=150924.44, stdev=23103.99, 
samples=9 00:17:22.235 iops : min=28108, max=49092, avg=37731.11, stdev=5776.00, samples=9 00:17:22.235 lat (usec) : 100=0.03%, 250=2.91%, 500=16.29%, 750=20.35%, 1000=21.23% 00:17:22.235 lat (msec) : 2=31.65%, 4=6.49%, 10=1.04% 00:17:22.235 cpu : usr=26.94%, sys=53.74%, ctx=67, majf=0, minf=764 00:17:22.235 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=10.5%, 16=25.4%, 32=57.2%, >=64=1.9% 00:17:22.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.235 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:22.235 issued rwts: total=186557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:22.235 00:17:22.235 Run status group 0 (all jobs): 00:17:22.235 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=729MiB (764MB), run=5001-5001msec 00:17:22.802 ----------------------------------------------------- 00:17:22.802 Suppressions used: 00:17:22.802 count bytes template 00:17:22.802 1 11 /usr/src/fio/parse.c 00:17:22.802 1 8 libtcmalloc_minimal.so 00:17:22.802 1 904 libcrypto.so 00:17:22.802 ----------------------------------------------------- 00:17:22.802 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.802 { 00:17:22.802 "subsystems": [ 00:17:22.802 { 
00:17:22.802 "subsystem": "bdev", 00:17:22.802 "config": [ 00:17:22.802 { 00:17:22.802 "params": { 00:17:22.802 "io_mechanism": "libaio", 00:17:22.802 "conserve_cpu": false, 00:17:22.802 "filename": "/dev/nvme0n1", 00:17:22.802 "name": "xnvme_bdev" 00:17:22.802 }, 00:17:22.802 "method": "bdev_xnvme_create" 00:17:22.802 }, 00:17:22.802 { 00:17:22.802 "method": "bdev_wait_for_examine" 00:17:22.802 } 00:17:22.802 ] 00:17:22.802 } 00:17:22.802 ] 00:17:22.802 } 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.802 10:22:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:23.061 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:23.061 fio-3.35 00:17:23.061 Starting 1 thread 00:17:29.627 00:17:29.627 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70738: Mon Nov 25 10:22:35 2024 00:17:29.627 write: IOPS=41.4k, BW=162MiB/s (169MB/s)(808MiB/5001msec); 0 zone resets 00:17:29.627 slat (usec): min=4, max=1119, avg=20.97, stdev=29.52 00:17:29.627 clat (usec): min=90, max=6729, avg=922.26, stdev=628.23 00:17:29.627 lat (usec): min=109, max=6805, avg=943.23, stdev=634.76 00:17:29.627 clat percentiles (usec): 00:17:29.627 | 1.00th=[ 196], 5.00th=[ 285], 10.00th=[ 359], 20.00th=[ 482], 00:17:29.627 | 30.00th=[ 594], 40.00th=[ 701], 50.00th=[ 799], 60.00th=[ 906], 00:17:29.627 | 70.00th=[ 1020], 80.00th=[ 1172], 90.00th=[ 1532], 95.00th=[ 2114], 00:17:29.627 | 99.00th=[ 3654], 99.50th=[ 4080], 99.90th=[ 4752], 99.95th=[ 4948], 00:17:29.627 | 99.99th=[ 5604] 00:17:29.627 bw ( KiB/s): min=129000, max=185077, per=100.00%, avg=169350.78, stdev=19014.24, samples=9 00:17:29.627 iops : min=32250, max=46269, avg=42337.67, stdev=4753.53, samples=9 00:17:29.627 lat (usec) : 100=0.03%, 250=3.10%, 500=18.52%, 750=23.49%, 1000=23.45% 00:17:29.627 lat (msec) : 2=25.73%, 4=5.10%, 10=0.58% 00:17:29.627 cpu : usr=28.34%, sys=51.54%, ctx=77, majf=0, minf=764 00:17:29.627 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=10.6%, 16=25.7%, 32=57.1%, >=64=1.8% 00:17:29.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.627 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:29.627 issued rwts: total=0,206887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.627 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:29.627 00:17:29.627 Run status group 0 (all jobs): 00:17:29.627 WRITE: bw=162MiB/s (169MB/s), 162MiB/s-162MiB/s (169MB/s-169MB/s), io=808MiB (847MB), run=5001-5001msec 00:17:30.195 ----------------------------------------------------- 00:17:30.195 Suppressions used: 00:17:30.195 count bytes template 00:17:30.195 1 11 /usr/src/fio/parse.c 00:17:30.195 1 8 libtcmalloc_minimal.so 00:17:30.195 1 904 libcrypto.so 00:17:30.195 ----------------------------------------------------- 00:17:30.195 00:17:30.195 00:17:30.195 real 0m14.900s 00:17:30.195 user 0m6.598s 00:17:30.195 sys 0m6.016s 00:17:30.195 10:22:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.195 10:22:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 ************************************ 00:17:30.195 END TEST xnvme_fio_plugin 00:17:30.195 ************************************ 00:17:30.195 10:22:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:30.195 10:22:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:30.195 10:22:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:30.195 10:22:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:30.195 10:22:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:30.195 10:22:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.195 10:22:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:30.195 ************************************ 00:17:30.195 START TEST xnvme_rpc 00:17:30.195 ************************************ 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70830 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70830 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70830 ']' 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.195 10:22:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:30.455 [2024-11-25 10:22:37.396456] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:17:30.455 [2024-11-25 10:22:37.396596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70830 ] 00:17:30.714 [2024-11-25 10:22:37.577885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.714 [2024-11-25 10:22:37.700264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.651 xnvme_bdev 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:31.651 10:22:38 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.651 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70830 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70830 ']' 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70830 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70830 00:17:31.916 killing process with pid 70830 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70830' 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70830 00:17:31.916 10:22:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70830 00:17:34.459 ************************************ 00:17:34.459 END TEST xnvme_rpc 00:17:34.459 ************************************ 00:17:34.459 00:17:34.459 real 0m3.929s 00:17:34.459 user 0m4.034s 00:17:34.459 sys 0m0.517s 00:17:34.459 10:22:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.459 10:22:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.459 10:22:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:34.459 10:22:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.459 10:22:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.459 10:22:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.459 ************************************ 00:17:34.459 START TEST xnvme_bdevperf 00:17:34.459 ************************************ 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:34.459 10:22:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:34.459 { 00:17:34.459 "subsystems": [ 00:17:34.459 { 00:17:34.459 "subsystem": "bdev", 00:17:34.459 "config": [ 00:17:34.459 { 00:17:34.459 "params": { 00:17:34.459 "io_mechanism": "libaio", 00:17:34.459 "conserve_cpu": true, 00:17:34.459 "filename": "/dev/nvme0n1", 00:17:34.459 "name": "xnvme_bdev" 00:17:34.459 }, 00:17:34.459 "method": "bdev_xnvme_create" 00:17:34.459 }, 00:17:34.460 { 00:17:34.460 "method": "bdev_wait_for_examine" 00:17:34.460 } 00:17:34.460 ] 00:17:34.460 } 00:17:34.460 ] 00:17:34.460 } 00:17:34.460 [2024-11-25 10:22:41.415860] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:17:34.460 [2024-11-25 10:22:41.416539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70906 ] 00:17:34.718 [2024-11-25 10:22:41.614823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.718 [2024-11-25 10:22:41.725942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.977 Running I/O for 5 seconds... 00:17:37.286 42702.00 IOPS, 166.80 MiB/s [2024-11-25T10:22:45.334Z] 42315.00 IOPS, 165.29 MiB/s [2024-11-25T10:22:46.272Z] 42935.00 IOPS, 167.71 MiB/s [2024-11-25T10:22:47.208Z] 43218.00 IOPS, 168.82 MiB/s 00:17:40.096 Latency(us) 00:17:40.096 [2024-11-25T10:22:47.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.097 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:40.097 xnvme_bdev : 5.00 43393.79 169.51 0.00 0.00 1471.11 139.00 30951.94 00:17:40.097 [2024-11-25T10:22:47.209Z] =================================================================================================================== 00:17:40.097 [2024-11-25T10:22:47.209Z] Total : 43393.79 169.51 0.00 0.00 1471.11 139.00 30951.94 00:17:41.474 10:22:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:41.474 10:22:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:41.474 10:22:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:41.474 10:22:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:41.474 10:22:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:41.474 { 00:17:41.474 "subsystems": [ 00:17:41.474 { 00:17:41.474 "subsystem": "bdev", 00:17:41.474 "config": [ 00:17:41.474 { 00:17:41.474 "params": { 00:17:41.474 "io_mechanism": "libaio", 00:17:41.474 "conserve_cpu": true, 00:17:41.474 "filename": "/dev/nvme0n1", 00:17:41.474 "name": "xnvme_bdev" 00:17:41.474 }, 00:17:41.474 "method": "bdev_xnvme_create" 00:17:41.474 }, 00:17:41.474 { 00:17:41.474 "method": "bdev_wait_for_examine" 00:17:41.474 } 00:17:41.474 ] 00:17:41.474 } 00:17:41.474 ] 00:17:41.474 } 00:17:41.474 [2024-11-25 10:22:48.285739] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:17:41.474 [2024-11-25 10:22:48.286004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:17:41.474 [2024-11-25 10:22:48.464085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.474 [2024-11-25 10:22:48.581790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.041 Running I/O for 5 seconds... 00:17:43.912 44003.00 IOPS, 171.89 MiB/s [2024-11-25T10:22:51.960Z] 38065.00 IOPS, 148.69 MiB/s [2024-11-25T10:22:53.338Z] 35571.00 IOPS, 138.95 MiB/s [2024-11-25T10:22:54.276Z] 36501.75 IOPS, 142.58 MiB/s 00:17:47.164 Latency(us) 00:17:47.164 [2024-11-25T10:22:54.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.164 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:47.164 xnvme_bdev : 5.00 37301.38 145.71 0.00 0.00 1711.64 78.96 17686.82 00:17:47.164 [2024-11-25T10:22:54.276Z] =================================================================================================================== 00:17:47.164 [2024-11-25T10:22:54.276Z] Total : 37301.38 145.71 0.00 0.00 1711.64 78.96 17686.82 00:17:48.103 00:17:48.103 real 0m13.771s 00:17:48.103 user 0m5.312s 00:17:48.103 sys 0m5.929s 00:17:48.103 ************************************ 00:17:48.103 END TEST xnvme_bdevperf 00:17:48.103 ************************************ 00:17:48.103 10:22:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.103 10:22:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:48.103 10:22:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:48.103 10:22:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.103 10:22:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.103 10:22:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:48.103 ************************************ 00:17:48.103 START TEST xnvme_fio_plugin 00:17:48.103 ************************************ 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:48.103 10:22:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:48.103 { 00:17:48.103 "subsystems": [ 00:17:48.103 { 00:17:48.103 "subsystem": "bdev", 00:17:48.103 "config": [ 00:17:48.103 { 00:17:48.103 "params": { 00:17:48.103 "io_mechanism": "libaio", 00:17:48.103 "conserve_cpu": true, 00:17:48.103 "filename": "/dev/nvme0n1", 00:17:48.103 "name": "xnvme_bdev" 00:17:48.103 }, 00:17:48.103 "method": "bdev_xnvme_create" 00:17:48.103 }, 00:17:48.103 { 00:17:48.103 "method": "bdev_wait_for_examine" 00:17:48.103 } 00:17:48.103 ] 00:17:48.103 } 00:17:48.103 ] 00:17:48.103 } 00:17:48.362 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:48.362 fio-3.35 00:17:48.362 Starting 1 thread 00:17:54.963 00:17:54.963 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71112: Mon Nov 25 10:23:01 2024 00:17:54.963 read: IOPS=48.4k, BW=189MiB/s (198MB/s)(946MiB/5001msec) 00:17:54.963 slat (usec): min=4, max=952, avg=17.64, stdev=27.43 00:17:54.963 clat (usec): min=90, max=5905, avg=810.79, stdev=518.27 00:17:54.963 lat (usec): min=141, max=5974, avg=828.43, stdev=522.62 00:17:54.963 clat percentiles (usec): 00:17:54.963 | 1.00th=[ 186], 5.00th=[ 269], 10.00th=[ 343], 20.00th=[ 457], 00:17:54.963 | 30.00th=[ 553], 40.00th=[ 644], 50.00th=[ 734], 60.00th=[ 816], 00:17:54.963 | 70.00th=[ 906], 80.00th=[ 1020], 90.00th=[ 1221], 95.00th=[ 1582], 00:17:54.963 | 99.00th=[ 3195], 99.50th=[ 3687], 99.90th=[ 4490], 99.95th=[ 4752], 00:17:54.963 | 99.99th=[ 5145] 00:17:54.963 bw ( KiB/s): min=173176, max=217352, per=100.00%, avg=194248.89, stdev=13749.63, samples=9 
00:17:54.963 iops : min=43294, max=54338, avg=48562.22, stdev=3437.41, samples=9 00:17:54.963 lat (usec) : 100=0.04%, 250=3.90%, 500=20.33%, 750=27.97%, 1000=26.05% 00:17:54.963 lat (msec) : 2=18.37%, 4=3.04%, 10=0.30% 00:17:54.963 cpu : usr=29.54%, sys=52.18%, ctx=42, majf=0, minf=764 00:17:54.963 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=10.0%, 16=25.1%, 32=58.5%, >=64=1.9% 00:17:54.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.963 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:54.963 issued rwts: total=242226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.963 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:54.963 00:17:54.963 Run status group 0 (all jobs): 00:17:54.963 READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=946MiB (992MB), run=5001-5001msec 00:17:55.541 ----------------------------------------------------- 00:17:55.541 Suppressions used: 00:17:55.541 count bytes template 00:17:55.541 1 11 /usr/src/fio/parse.c 00:17:55.541 1 8 libtcmalloc_minimal.so 00:17:55.541 1 904 libcrypto.so 00:17:55.541 ----------------------------------------------------- 00:17:55.541 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:55.541 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:55.542 10:23:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:55.542 { 00:17:55.542 "subsystems": [ 00:17:55.542 { 00:17:55.542 "subsystem": "bdev", 00:17:55.542 "config": [ 00:17:55.542 { 00:17:55.542 "params": { 00:17:55.542 "io_mechanism": "libaio", 00:17:55.542 "conserve_cpu": true, 00:17:55.542 "filename": "/dev/nvme0n1", 00:17:55.542 "name": "xnvme_bdev" 00:17:55.542 }, 00:17:55.542 "method": "bdev_xnvme_create" 00:17:55.542 }, 00:17:55.542 { 00:17:55.542 "method": "bdev_wait_for_examine" 00:17:55.542 } 00:17:55.542 ] 00:17:55.542 } 00:17:55.542 ] 00:17:55.542 } 00:17:55.800 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:55.800 fio-3.35 00:17:55.800 Starting 1 thread 00:18:02.373 00:18:02.373 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71209: Mon Nov 25 10:23:08 2024 00:18:02.374 write: IOPS=45.3k, BW=177MiB/s (186MB/s)(886MiB/5001msec); 0 zone resets 00:18:02.374 slat (usec): min=4, max=2752, avg=18.60, stdev=29.69 00:18:02.374 clat (usec): min=11, max=7858, avg=871.33, stdev=614.57 00:18:02.374 lat (usec): min=46, max=7883, avg=889.93, stdev=619.01 00:18:02.374 clat percentiles (usec): 00:18:02.374 | 1.00th=[ 186], 5.00th=[ 273], 10.00th=[ 347], 20.00th=[ 465], 00:18:02.374 | 30.00th=[ 562], 40.00th=[ 660], 50.00th=[ 750], 60.00th=[ 848], 00:18:02.374 | 70.00th=[ 947], 80.00th=[ 1090], 90.00th=[ 1418], 95.00th=[ 1958], 00:18:02.374 | 99.00th=[ 3589], 99.50th=[ 4146], 99.90th=[ 5080], 99.95th=[ 5538], 00:18:02.374 | 99.99th=[ 7177] 00:18:02.374 bw ( KiB/s): min=159504, max=197280, per=100.00%, avg=181877.33, stdev=11878.27, samples=9 00:18:02.374 iops : min=39876, max=49320, avg=45469.33, stdev=2969.57, samples=9 00:18:02.374 lat (usec) : 20=0.01%, 50=0.01%, 100=0.07%, 250=3.48%, 500=19.95% 00:18:02.374 lat (usec) : 750=26.39%, 1000=24.13% 00:18:02.374 lat (msec) : 2=21.18%, 4=4.18%, 10=0.62% 00:18:02.374 cpu : usr=30.62%, sys=50.34%, ctx=155, majf=0, minf=764 00:18:02.374 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=9.7%, 16=24.9%, 32=59.0%, >=64=2.0% 00:18:02.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.374 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:18:02.374 issued rwts: total=0,226693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.374 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:02.374 00:18:02.374 Run status group 0 (all jobs): 00:18:02.374 WRITE: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=886MiB (929MB), run=5001-5001msec 00:18:02.940 ----------------------------------------------------- 00:18:02.940 Suppressions used: 00:18:02.940 count bytes template 00:18:02.940 1 11 /usr/src/fio/parse.c 00:18:02.940 1 8 libtcmalloc_minimal.so 00:18:02.940 1 904 libcrypto.so 00:18:02.940 ----------------------------------------------------- 00:18:02.940 00:18:02.940 00:18:02.940 real 0m14.746s 00:18:02.940 user 0m6.656s 00:18:02.940 sys 0m5.897s 
00:18:02.940 10:23:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.940 ************************************ 00:18:02.940 END TEST xnvme_fio_plugin 00:18:02.940 ************************************ 00:18:02.940 10:23:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:02.940 10:23:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:02.940 10:23:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:02.940 10:23:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.940 10:23:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.940 ************************************ 00:18:02.940 START TEST xnvme_rpc 00:18:02.940 ************************************ 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71295 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71295 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71295 ']' 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.940 10:23:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.200 [2024-11-25 10:23:10.063142] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:18:03.200 [2024-11-25 10:23:10.063268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71295 ] 00:18:03.200 [2024-11-25 10:23:10.241895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.459 [2024-11-25 10:23:10.357585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 xnvme_bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71295 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71295 ']' 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71295 00:18:04.396 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71295 00:18:04.397 killing process with pid 71295 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71295' 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71295 00:18:04.397 10:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71295 00:18:06.934 00:18:06.934 real 0m3.903s 00:18:06.934 user 0m3.992s 00:18:06.934 sys 0m0.528s 00:18:06.934 ************************************ 00:18:06.934 END TEST xnvme_rpc 00:18:06.934 ************************************ 00:18:06.934 10:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.934 10:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.934 10:23:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:06.934 10:23:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:06.934 10:23:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.934 10:23:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:06.934 ************************************ 00:18:06.934 START TEST xnvme_bdevperf 00:18:06.934 ************************************ 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:06.934 10:23:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:06.934 { 00:18:06.934 "subsystems": [ 00:18:06.934 { 00:18:06.934 "subsystem": "bdev", 00:18:06.934 "config": [ 00:18:06.934 { 00:18:06.934 "params": { 00:18:06.934 "io_mechanism": "io_uring", 00:18:06.934 "conserve_cpu": false, 00:18:06.934 "filename": "/dev/nvme0n1", 00:18:06.934 "name": "xnvme_bdev" 00:18:06.934 }, 00:18:06.934 "method": "bdev_xnvme_create" 00:18:06.934 }, 00:18:06.934 { 00:18:06.934 "method": "bdev_wait_for_examine" 00:18:06.934 } 00:18:06.934 ] 00:18:06.934 } 00:18:06.934 ] 00:18:06.934 } 00:18:06.934 [2024-11-25 10:23:14.019714] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:18:06.934 [2024-11-25 10:23:14.019837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71375 ] 00:18:07.193 [2024-11-25 10:23:14.205126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.452 [2024-11-25 10:23:14.318512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.708 Running I/O for 5 seconds... 00:18:09.577 51721.00 IOPS, 202.04 MiB/s [2024-11-25T10:23:18.065Z] 50963.00 IOPS, 199.07 MiB/s [2024-11-25T10:23:19.001Z] 49623.00 IOPS, 193.84 MiB/s [2024-11-25T10:23:19.937Z] 48411.50 IOPS, 189.11 MiB/s 00:18:12.825 Latency(us) 00:18:12.825 [2024-11-25T10:23:19.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.825 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:12.825 xnvme_bdev : 5.00 49463.24 193.22 0.00 0.00 1290.40 348.74 9475.08 00:18:12.825 [2024-11-25T10:23:19.937Z] =================================================================================================================== 00:18:12.825 [2024-11-25T10:23:19.937Z] Total : 49463.24 193.22 0.00 0.00 1290.40 348.74 9475.08 00:18:13.776 10:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:13.776 10:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:13.776 10:23:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:13.776 10:23:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:13.776 10:23:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:13.776 { 00:18:13.776 "subsystems": [ 00:18:13.776 { 00:18:13.776 "subsystem": "bdev", 00:18:13.776 "config": [ 00:18:13.776 { 00:18:13.776 "params": { 00:18:13.776 "io_mechanism": "io_uring", 00:18:13.776 "conserve_cpu": false, 00:18:13.776 "filename": "/dev/nvme0n1", 00:18:13.776 "name": "xnvme_bdev" 00:18:13.776 }, 00:18:13.776 "method": "bdev_xnvme_create" 00:18:13.776 }, 00:18:13.776 { 00:18:13.776 "method": "bdev_wait_for_examine" 00:18:13.776 } 00:18:13.776 ] 00:18:13.776 } 00:18:13.776 ] 00:18:13.776 } 00:18:14.070 [2024-11-25 10:23:20.910078] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:18:14.070 [2024-11-25 10:23:20.910273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71460 ] 00:18:14.070 [2024-11-25 10:23:21.110202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.328 [2024-11-25 10:23:21.230988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.587 Running I/O for 5 seconds... 00:18:16.899 32900.00 IOPS, 128.52 MiB/s [2024-11-25T10:23:24.947Z] 30754.00 IOPS, 120.13 MiB/s [2024-11-25T10:23:25.915Z] 30273.33 IOPS, 118.26 MiB/s [2024-11-25T10:23:26.851Z] 30321.00 IOPS, 118.44 MiB/s 00:18:19.739 Latency(us) 00:18:19.739 [2024-11-25T10:23:26.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.739 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:19.739 xnvme_bdev : 5.00 30293.08 118.33 0.00 0.00 2106.14 608.64 5790.33 00:18:19.739 [2024-11-25T10:23:26.851Z] =================================================================================================================== 00:18:19.739 [2024-11-25T10:23:26.851Z] Total : 30293.08 118.33 0.00 0.00 2106.14 608.64 5790.33 00:18:20.676 00:18:20.676 real 0m13.806s 00:18:20.676 user 0m6.430s 00:18:20.676 sys 0m7.159s 00:18:20.676 10:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.676 ************************************ 00:18:20.676 END TEST xnvme_bdevperf 00:18:20.676 ************************************ 00:18:20.676 10:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:20.676 10:23:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:20.676 10:23:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.676 10:23:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.676 10:23:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.934 ************************************ 00:18:20.934 START TEST xnvme_fio_plugin 00:18:20.934 ************************************ 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 
00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:20.934 { 00:18:20.934 "subsystems": [ 00:18:20.934 { 00:18:20.934 "subsystem": "bdev", 00:18:20.934 "config": [ 00:18:20.934 { 00:18:20.934 "params": { 00:18:20.934 "io_mechanism": "io_uring", 00:18:20.934 "conserve_cpu": false, 00:18:20.934 "filename": "/dev/nvme0n1", 00:18:20.934 "name": "xnvme_bdev" 00:18:20.934 }, 00:18:20.934 "method": "bdev_xnvme_create" 00:18:20.934 }, 00:18:20.934 { 00:18:20.934 "method": "bdev_wait_for_examine" 00:18:20.934 } 00:18:20.934 ] 00:18:20.934 } 00:18:20.934 ] 00:18:20.934 } 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:20.934 10:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:21.192 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:21.192 fio-3.35 00:18:21.192 Starting 1 thread 00:18:27.752 00:18:27.752 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71585: Mon Nov 25 10:23:33 2024 00:18:27.752 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(664MiB/5002msec) 00:18:27.752 slat (nsec): min=2354, max=51519, avg=4820.77, stdev=1893.81 00:18:27.752 clat (usec): min=988, max=4258, avg=1688.60, stdev=349.73 00:18:27.752 lat (usec): min=991, max=4270, avg=1693.42, stdev=350.87 00:18:27.752 clat percentiles (usec): 00:18:27.752 | 1.00th=[ 1106], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1401], 00:18:27.752 | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1631], 60.00th=[ 1713], 00:18:27.752 | 70.00th=[ 1795], 80.00th=[ 1942], 90.00th=[ 2212], 95.00th=[ 2376], 00:18:27.752 | 99.00th=[ 2671], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3261], 00:18:27.752 | 99.99th=[ 4113] 00:18:27.752 bw ( KiB/s): min=102400, max=162304, per=100.00%, avg=137500.44, 
stdev=21183.30, samples=9 00:18:27.752 iops : min=25600, max=40576, avg=34375.11, stdev=5295.83, samples=9 00:18:27.752 lat (usec) : 1000=0.01% 00:18:27.752 lat (msec) : 2=82.54%, 4=17.44%, 10=0.02% 00:18:27.752 cpu : usr=31.83%, sys=67.21%, ctx=12, majf=0, minf=762 00:18:27.752 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.752 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:27.752 issued rwts: total=170046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:27.752 00:18:27.752 Run status group 0 (all jobs): 00:18:27.752 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=664MiB (697MB), run=5002-5002msec 00:18:28.321 ----------------------------------------------------- 00:18:28.321 Suppressions used: 00:18:28.321 count bytes template 00:18:28.321 1 11 /usr/src/fio/parse.c 00:18:28.321 1 8 libtcmalloc_minimal.so 00:18:28.321 1 904 libcrypto.so 00:18:28.321 ----------------------------------------------------- 00:18:28.321 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.321 10:23:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:28.321 { 00:18:28.321 "subsystems": [ 00:18:28.321 { 00:18:28.321 "subsystem": "bdev", 00:18:28.321 "config": [ 00:18:28.321 { 00:18:28.321 "params": { 00:18:28.321 "io_mechanism": "io_uring", 00:18:28.321 "conserve_cpu": false, 00:18:28.321 "filename": "/dev/nvme0n1", 00:18:28.321 "name": "xnvme_bdev" 00:18:28.321 }, 00:18:28.321 "method": "bdev_xnvme_create" 00:18:28.321 }, 00:18:28.321 { 00:18:28.321 "method": "bdev_wait_for_examine" 00:18:28.321 } 00:18:28.321 ] 00:18:28.321 } 00:18:28.321 ] 00:18:28.321 } 00:18:28.321 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:28.321 fio-3.35 00:18:28.321 Starting 1 thread 00:18:34.887 00:18:34.887 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71678: Mon Nov 25 10:23:41 2024 00:18:34.887 write: IOPS=35.7k, BW=139MiB/s (146MB/s)(697MiB/5001msec); 0 zone resets 00:18:34.887 slat (nsec): min=3062, max=67742, avg=4726.41, stdev=1838.35 00:18:34.887 clat (usec): min=878, max=3037, avg=1605.32, stdev=289.78 00:18:34.887 lat (usec): min=882, max=3049, avg=1610.05, stdev=290.74 00:18:34.887 clat percentiles (usec): 00:18:34.887 | 1.00th=[ 1172], 5.00th=[ 1270], 10.00th=[ 1319], 20.00th=[ 1385], 00:18:34.887 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1532], 60.00th=[ 1582], 00:18:34.887 | 70.00th=[ 1647], 80.00th=[ 1795], 90.00th=[ 2057], 95.00th=[ 2245], 00:18:34.887 | 99.00th=[ 2507], 99.50th=[ 2573], 99.90th=[ 2737], 99.95th=[ 2769], 00:18:34.887 | 99.99th=[ 2900] 00:18:34.887 bw ( KiB/s): min=103424, max=161792, per=100.00%, avg=142848.00, stdev=15907.06, samples=9 00:18:34.887 iops : min=25856, max=40448, avg=35712.00, stdev=3976.76, samples=9 00:18:34.887 lat (usec) : 1000=0.09% 00:18:34.887 lat (msec) : 2=88.07%, 4=11.84% 00:18:34.887 cpu : usr=32.06%, sys=66.94%, ctx=31, majf=0, minf=762 00:18:34.887 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:34.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.887 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:34.887 issued rwts: total=0,178368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:34.887 00:18:34.887 Run status group 0 (all jobs): 00:18:34.887 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=697MiB (731MB), run=5001-5001msec 00:18:35.451 ----------------------------------------------------- 00:18:35.451 Suppressions used: 00:18:35.451 count bytes template 00:18:35.451 1 11 /usr/src/fio/parse.c 00:18:35.451 1 8 libtcmalloc_minimal.so 00:18:35.451 1 904 libcrypto.so 00:18:35.451 ----------------------------------------------------- 00:18:35.451 00:18:35.451 00:18:35.451 real 0m14.678s 00:18:35.451 user 0m6.857s 00:18:35.451 sys 0m7.457s 00:18:35.451 10:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.451 
************************************ 00:18:35.451 END TEST xnvme_fio_plugin 00:18:35.451 ************************************ 00:18:35.451 10:23:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:35.451 10:23:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:35.451 10:23:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:35.451 10:23:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:35.451 10:23:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:35.451 10:23:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:35.451 10:23:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.451 10:23:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.451 ************************************ 00:18:35.451 START TEST xnvme_rpc 00:18:35.451 ************************************ 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71765 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71765 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71765 ']' 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.451 10:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.709 [2024-11-25 10:23:42.667247] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
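The target is now up and listening on the default /var/tmp/spdk.sock; everything rpc_cmd and rpc_xnvme do below goes over that socket. A sketch of the same query with the stock SPDK client, assuming scripts/rpc.py from the repo checkout is used directly:

    # Fetch the bdev subsystem config and pick out the xnvme bdev's fields,
    # mirroring the jq filter the rpc_xnvme helper applies below.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
    # expected output: /dev/nvme0n1
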
00:18:35.709 [2024-11-25 10:23:42.667382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71765 ] 00:18:35.972 [2024-11-25 10:23:42.833610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.972 [2024-11-25 10:23:42.973559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.905 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.905 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:36.905 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 xnvme_bdev 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:36.906 10:23:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:36.906 10:23:44 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.906 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71765 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71765 ']' 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71765 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71765 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.165 killing process with pid 71765 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71765' 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71765 00:18:37.165 10:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71765 00:18:39.695 00:18:39.695 real 0m3.975s 00:18:39.695 user 0m4.078s 00:18:39.695 sys 0m0.544s 00:18:39.695 10:23:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.695 10:23:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.695 ************************************ 00:18:39.695 END TEST xnvme_rpc 00:18:39.695 ************************************ 00:18:39.695 10:23:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:39.695 10:23:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.695 10:23:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.695 10:23:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.695 ************************************ 00:18:39.695 START TEST xnvme_bdevperf 00:18:39.695 ************************************ 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:39.695 10:23:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:39.695 { 00:18:39.695 "subsystems": [ 00:18:39.695 { 00:18:39.695 "subsystem": "bdev", 00:18:39.695 "config": [ 00:18:39.695 { 00:18:39.695 "params": { 00:18:39.695 "io_mechanism": "io_uring", 00:18:39.695 "conserve_cpu": true, 00:18:39.695 "filename": "/dev/nvme0n1", 00:18:39.695 "name": "xnvme_bdev" 00:18:39.695 }, 00:18:39.695 "method": "bdev_xnvme_create" 00:18:39.695 }, 00:18:39.695 { 00:18:39.695 "method": "bdev_wait_for_examine" 00:18:39.695 } 00:18:39.695 ] 00:18:39.695 } 00:18:39.695 ] 00:18:39.695 } 00:18:39.695 [2024-11-25 10:23:46.700422] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:18:39.695 [2024-11-25 10:23:46.700568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71845 ] 00:18:39.954 [2024-11-25 10:23:46.888192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.954 [2024-11-25 10:23:47.017079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.557 Running I/O for 5 seconds... 00:18:42.472 51520.00 IOPS, 201.25 MiB/s [2024-11-25T10:23:50.521Z] 51360.00 IOPS, 200.62 MiB/s [2024-11-25T10:23:51.457Z] 50176.00 IOPS, 196.00 MiB/s [2024-11-25T10:23:52.410Z] 49200.00 IOPS, 192.19 MiB/s [2024-11-25T10:23:52.410Z] 46169.60 IOPS, 180.35 MiB/s 00:18:45.298 Latency(us) 00:18:45.298 [2024-11-25T10:23:52.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.298 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:45.298 xnvme_bdev : 5.00 46142.01 180.24 0.00 0.00 1383.30 743.53 6395.68 00:18:45.298 [2024-11-25T10:23:52.410Z] =================================================================================================================== 00:18:45.298 [2024-11-25T10:23:52.410Z] Total : 46142.01 180.24 0.00 0.00 1383.30 743.53 6395.68 00:18:46.675 10:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:46.675 10:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:46.675 10:23:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:46.675 10:23:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:46.675 10:23:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:46.675 { 00:18:46.675 "subsystems": [ 00:18:46.675 { 00:18:46.675 "subsystem": "bdev", 00:18:46.675 "config": [ 00:18:46.675 { 00:18:46.675 "params": { 00:18:46.675 "io_mechanism": "io_uring", 00:18:46.675 "conserve_cpu": true, 00:18:46.675 "filename": "/dev/nvme0n1", 00:18:46.675 "name": "xnvme_bdev" 00:18:46.675 }, 00:18:46.675 "method": "bdev_xnvme_create" 00:18:46.675 }, 00:18:46.675 { 00:18:46.675 "method": "bdev_wait_for_examine" 00:18:46.675 } 00:18:46.675 ] 00:18:46.675 } 00:18:46.675 ] 00:18:46.675 } 00:18:46.675 [2024-11-25 10:23:53.566626] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
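bdevperf consumes the same JSON bdev config as the fio plugin, again on file descriptor 62; -q sets the queue depth, -w the workload, -t the runtime in seconds, -T pins the run to one bdev, and -o the IO size in bytes. A sketch of the randwrite invocation just traced, with the gen_conf output inlined (this round has conserve_cpu set to true):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
      62<<<'{"subsystems":[{"subsystem":"bdev","config":[
        {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring",
         "conserve_cpu":true,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
        {"method":"bdev_wait_for_examine"}]}]}'
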
00:18:46.675 [2024-11-25 10:23:53.566752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:18:46.675 [2024-11-25 10:23:53.749419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.934 [2024-11-25 10:23:53.875299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.192 Running I/O for 5 seconds... 00:18:49.508 33344.00 IOPS, 130.25 MiB/s [2024-11-25T10:23:57.280Z] 33312.00 IOPS, 130.12 MiB/s [2024-11-25T10:23:58.657Z] 33770.67 IOPS, 131.92 MiB/s [2024-11-25T10:23:59.592Z] 34000.00 IOPS, 132.81 MiB/s 00:18:52.480 Latency(us) 00:18:52.480 [2024-11-25T10:23:59.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.480 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:52.480 xnvme_bdev : 5.00 34555.62 134.98 0.00 0.00 1846.85 901.45 7211.59 00:18:52.480 [2024-11-25T10:23:59.592Z] =================================================================================================================== 00:18:52.480 [2024-11-25T10:23:59.592Z] Total : 34555.62 134.98 0.00 0.00 1846.85 901.45 7211.59 00:18:53.420 00:18:53.420 real 0m13.736s 00:18:53.420 user 0m7.581s 00:18:53.420 sys 0m5.687s 00:18:53.420 10:24:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.420 10:24:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:53.420 ************************************ 00:18:53.420 END TEST xnvme_bdevperf 00:18:53.420 ************************************ 00:18:53.420 10:24:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:53.420 10:24:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:53.420 10:24:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.420 10:24:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.420 ************************************ 00:18:53.420 START TEST xnvme_fio_plugin 00:18:53.420 ************************************ 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:53.420 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:53.421 10:24:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:53.421 { 00:18:53.421 "subsystems": [ 00:18:53.421 { 00:18:53.421 "subsystem": "bdev", 00:18:53.421 "config": [ 00:18:53.421 { 00:18:53.421 "params": { 00:18:53.421 "io_mechanism": "io_uring", 00:18:53.421 "conserve_cpu": true, 00:18:53.421 "filename": "/dev/nvme0n1", 00:18:53.421 "name": "xnvme_bdev" 00:18:53.421 }, 00:18:53.421 "method": "bdev_xnvme_create" 00:18:53.421 }, 00:18:53.421 { 00:18:53.421 "method": "bdev_wait_for_examine" 00:18:53.421 } 00:18:53.421 ] 00:18:53.421 } 00:18:53.421 ] 00:18:53.421 } 00:18:53.749 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:53.749 fio-3.35 00:18:53.749 Starting 1 thread 00:19:00.316 00:19:00.316 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72045: Mon Nov 25 10:24:06 2024 00:19:00.316 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(665MiB/5001msec) 00:19:00.316 slat (usec): min=2, max=107, avg= 4.83, stdev= 2.06 00:19:00.316 clat (usec): min=939, max=3329, avg=1686.04, stdev=358.93 00:19:00.316 lat (usec): min=942, max=3337, avg=1690.87, stdev=360.24 00:19:00.316 clat percentiles (usec): 00:19:00.316 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1287], 20.00th=[ 1385], 00:19:00.316 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1713], 00:19:00.316 | 70.00th=[ 1844], 80.00th=[ 2008], 90.00th=[ 2212], 95.00th=[ 2376], 00:19:00.316 | 99.00th=[ 2638], 99.50th=[ 2704], 99.90th=[ 2933], 99.95th=[ 3064], 00:19:00.316 | 99.99th=[ 3261] 00:19:00.316 bw ( KiB/s): min=99328, max=151040, per=98.47%, avg=134030.22, stdev=15588.23, 
samples=9 00:19:00.316 iops : min=24832, max=37760, avg=33507.56, stdev=3897.06, samples=9 00:19:00.316 lat (usec) : 1000=0.09% 00:19:00.316 lat (msec) : 2=79.80%, 4=20.11% 00:19:00.316 cpu : usr=46.54%, sys=49.82%, ctx=8, majf=0, minf=762 00:19:00.316 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:00.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.316 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:00.316 issued rwts: total=170176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.316 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:00.316 00:19:00.317 Run status group 0 (all jobs): 00:19:00.317 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=665MiB (697MB), run=5001-5001msec 00:19:00.886 ----------------------------------------------------- 00:19:00.886 Suppressions used: 00:19:00.886 count bytes template 00:19:00.886 1 11 /usr/src/fio/parse.c 00:19:00.886 1 8 libtcmalloc_minimal.so 00:19:00.886 1 904 libcrypto.so 00:19:00.886 ----------------------------------------------------- 00:19:00.886 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.886 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
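The ldd | grep | awk sequence traced above finds the sanitizer runtime the fio plugin links against, and the LD_PRELOAD assignment that follows places it ahead of the plugin itself, since ASan requires being the first shared object loaded into the process. The same logic in one piece:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # The third ldd column is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
      # The ASan runtime must come before the plugin in the preload list.
      export LD_PRELOAD="$asan_lib $plugin"
    fi
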
00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:00.887 10:24:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.887 { 00:19:00.887 "subsystems": [ 00:19:00.887 { 00:19:00.887 "subsystem": "bdev", 00:19:00.887 "config": [ 00:19:00.887 { 00:19:00.887 "params": { 00:19:00.887 "io_mechanism": "io_uring", 00:19:00.887 "conserve_cpu": true, 00:19:00.887 "filename": "/dev/nvme0n1", 00:19:00.887 "name": "xnvme_bdev" 00:19:00.887 }, 00:19:00.887 "method": "bdev_xnvme_create" 00:19:00.887 }, 00:19:00.887 { 00:19:00.887 "method": "bdev_wait_for_examine" 00:19:00.887 } 00:19:00.887 ] 00:19:00.887 } 00:19:00.887 ] 00:19:00.887 } 00:19:01.145 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:01.145 fio-3.35 00:19:01.145 Starting 1 thread 00:19:07.706 00:19:07.706 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72142: Mon Nov 25 10:24:13 2024 00:19:07.706 write: IOPS=36.1k, BW=141MiB/s (148MB/s)(705MiB/5002msec); 0 zone resets 00:19:07.706 slat (nsec): min=2539, max=60836, avg=4674.16, stdev=2052.83 00:19:07.706 clat (usec): min=864, max=5778, avg=1587.21, stdev=373.50 00:19:07.706 lat (usec): min=867, max=5787, avg=1591.89, stdev=374.61 00:19:07.706 clat percentiles (usec): 00:19:07.706 | 1.00th=[ 971], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1270], 00:19:07.706 | 30.00th=[ 1385], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1631], 00:19:07.706 | 70.00th=[ 1729], 80.00th=[ 1844], 90.00th=[ 2057], 95.00th=[ 2245], 00:19:07.706 | 99.00th=[ 2737], 99.50th=[ 2933], 99.90th=[ 3261], 99.95th=[ 3490], 00:19:07.706 | 99.99th=[ 5669] 00:19:07.706 bw ( KiB/s): min=123392, max=171520, per=100.00%, avg=145152.33, stdev=15050.28, samples=9 00:19:07.706 iops : min=30848, max=42880, avg=36288.00, stdev=3762.69, samples=9 00:19:07.706 lat (usec) : 1000=1.94% 00:19:07.706 lat (msec) : 2=85.68%, 4=12.34%, 10=0.04% 00:19:07.706 cpu : usr=48.45%, sys=48.21%, ctx=12, majf=0, minf=762 00:19:07.706 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:07.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.706 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:07.706 issued rwts: total=0,180480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.706 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.706 00:19:07.706 Run status group 0 (all jobs): 00:19:07.706 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=705MiB (739MB), run=5002-5002msec 00:19:08.272 ----------------------------------------------------- 00:19:08.272 Suppressions used: 00:19:08.272 count bytes template 00:19:08.272 1 11 /usr/src/fio/parse.c 00:19:08.272 1 8 libtcmalloc_minimal.so 00:19:08.272 1 904 libcrypto.so 00:19:08.272 ----------------------------------------------------- 00:19:08.272 00:19:08.272 00:19:08.272 real 0m14.779s 00:19:08.272 user 0m8.519s 00:19:08.272 sys 0m5.646s 00:19:08.272 10:24:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.272 
************************************ 00:19:08.272 END TEST xnvme_fio_plugin 00:19:08.272 ************************************ 00:19:08.272 10:24:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:08.272 10:24:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:08.272 10:24:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:08.272 10:24:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.272 10:24:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.272 ************************************ 00:19:08.272 START TEST xnvme_rpc 00:19:08.272 ************************************ 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72229 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72229 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72229 ']' 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.272 10:24:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:08.272 [2024-11-25 10:24:15.366001] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
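From here on the tests switch the io_mechanism to io_uring_cmd and the filename to /dev/ng0n1: io_uring_cmd drives the NVMe generic character node via uring passthrough instead of going through the block layer. A quick pre-flight sketch for that prerequisite:

    # io_uring_cmd needs the NVMe generic char device, not the block device.
    if [[ -c /dev/ng0n1 ]]; then
      echo "char device present; io_uring_cmd passthrough is possible"
    else
      echo "no /dev/ng0n1: kernel lacks NVMe generic char device support" >&2
    fi
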
00:19:08.272 [2024-11-25 10:24:15.366129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72229 ] 00:19:08.530 [2024-11-25 10:24:15.550871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.789 [2024-11-25 10:24:15.671625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.724 xnvme_bdev 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:09.724 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72229 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72229 ']' 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72229 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72229 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.725 killing process with pid 72229 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72229' 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72229 00:19:09.725 10:24:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72229 00:19:12.253 00:19:12.253 real 0m3.915s 00:19:12.253 user 0m3.990s 00:19:12.253 sys 0m0.530s 00:19:12.253 10:24:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.253 10:24:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.253 ************************************ 00:19:12.253 END TEST xnvme_rpc 00:19:12.253 ************************************ 00:19:12.253 10:24:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:12.253 10:24:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.253 10:24:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.253 10:24:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.253 ************************************ 00:19:12.253 START TEST xnvme_bdevperf 00:19:12.253 ************************************ 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:12.253 10:24:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:12.253 { 00:19:12.253 "subsystems": [ 00:19:12.253 { 00:19:12.253 "subsystem": "bdev", 00:19:12.253 "config": [ 00:19:12.253 { 00:19:12.253 "params": { 00:19:12.253 "io_mechanism": "io_uring_cmd", 00:19:12.253 "conserve_cpu": false, 00:19:12.253 "filename": "/dev/ng0n1", 00:19:12.253 "name": "xnvme_bdev" 00:19:12.253 }, 00:19:12.253 "method": "bdev_xnvme_create" 00:19:12.253 }, 00:19:12.253 { 00:19:12.253 "method": "bdev_wait_for_examine" 00:19:12.253 } 00:19:12.253 ] 00:19:12.253 } 00:19:12.253 ] 00:19:12.253 } 00:19:12.253 [2024-11-25 10:24:19.340465] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:19:12.253 [2024-11-25 10:24:19.340603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72314 ] 00:19:12.511 [2024-11-25 10:24:19.521004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.769 [2024-11-25 10:24:19.632718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.025 Running I/O for 5 seconds... 00:19:14.894 33664.00 IOPS, 131.50 MiB/s [2024-11-25T10:24:23.382Z] 33888.00 IOPS, 132.38 MiB/s [2024-11-25T10:24:24.332Z] 34026.67 IOPS, 132.92 MiB/s [2024-11-25T10:24:25.268Z] 33712.00 IOPS, 131.69 MiB/s [2024-11-25T10:24:25.268Z] 33638.40 IOPS, 131.40 MiB/s 00:19:18.156 Latency(us) 00:19:18.156 [2024-11-25T10:24:25.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.156 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:18.156 xnvme_bdev : 5.00 33636.04 131.39 0.00 0.00 1897.22 881.71 8790.77 00:19:18.156 [2024-11-25T10:24:25.268Z] =================================================================================================================== 00:19:18.156 [2024-11-25T10:24:25.268Z] Total : 33636.04 131.39 0.00 0.00 1897.22 881.71 8790.77 00:19:19.093 10:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:19.093 10:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:19.093 10:24:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:19.093 10:24:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:19.093 10:24:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:19.093 { 00:19:19.093 "subsystems": [ 00:19:19.093 { 00:19:19.093 "subsystem": "bdev", 00:19:19.093 "config": [ 00:19:19.093 { 00:19:19.093 "params": { 00:19:19.093 "io_mechanism": "io_uring_cmd", 00:19:19.093 "conserve_cpu": false, 00:19:19.093 "filename": "/dev/ng0n1", 00:19:19.093 "name": "xnvme_bdev" 00:19:19.093 }, 00:19:19.093 "method": "bdev_xnvme_create" 00:19:19.093 }, 00:19:19.093 { 00:19:19.093 "method": "bdev_wait_for_examine" 00:19:19.093 } 00:19:19.093 ] 00:19:19.093 } 00:19:19.093 ] 00:19:19.093 } 00:19:19.093 [2024-11-25 10:24:26.172634] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
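The randread row above is internally consistent with Little's law: at a fixed queue depth the average latency should sit near depth divided by IOPS, and it does:

    awk 'BEGIN { printf "%.0f us expected vs 1897 us measured\n",
                 64 / 33636.04 * 1e6 }'
    # prints: 1903 us expected vs 1897 us measured
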
00:19:19.093 [2024-11-25 10:24:26.172896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:19:19.350 [2024-11-25 10:24:26.352561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.608 [2024-11-25 10:24:26.467781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.867 Running I/O for 5 seconds... 00:19:21.802 28541.00 IOPS, 111.49 MiB/s [2024-11-25T10:24:29.847Z] 29758.50 IOPS, 116.24 MiB/s [2024-11-25T10:24:31.223Z] 29439.00 IOPS, 115.00 MiB/s [2024-11-25T10:24:32.162Z] 29759.25 IOPS, 116.25 MiB/s 00:19:25.050 Latency(us) 00:19:25.050 [2024-11-25T10:24:32.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.050 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:25.050 xnvme_bdev : 5.00 30532.68 119.27 0.00 0.00 2089.67 1112.01 5553.45 00:19:25.050 [2024-11-25T10:24:32.162Z] =================================================================================================================== 00:19:25.050 [2024-11-25T10:24:32.162Z] Total : 30532.68 119.27 0.00 0.00 2089.67 1112.01 5553.45 00:19:25.988 10:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:25.988 10:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:25.988 10:24:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:25.988 10:24:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:25.988 10:24:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:25.988 { 00:19:25.988 "subsystems": [ 00:19:25.988 { 00:19:25.988 "subsystem": "bdev", 00:19:25.988 "config": [ 00:19:25.988 { 00:19:25.988 "params": { 00:19:25.988 "io_mechanism": "io_uring_cmd", 00:19:25.988 "conserve_cpu": false, 00:19:25.988 "filename": "/dev/ng0n1", 00:19:25.988 "name": "xnvme_bdev" 00:19:25.988 }, 00:19:25.988 "method": "bdev_xnvme_create" 00:19:25.988 }, 00:19:25.988 { 00:19:25.988 "method": "bdev_wait_for_examine" 00:19:25.988 } 00:19:25.988 ] 00:19:25.988 } 00:19:25.988 ] 00:19:25.988 } 00:19:25.988 [2024-11-25 10:24:33.043080] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:19:25.988 [2024-11-25 10:24:33.043461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72469 ] 00:19:26.247 [2024-11-25 10:24:33.222816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.247 [2024-11-25 10:24:33.342518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.814 Running I/O for 5 seconds... 
00:19:28.734 80576.00 IOPS, 314.75 MiB/s [2024-11-25T10:24:36.783Z] 75456.00 IOPS, 294.75 MiB/s [2024-11-25T10:24:37.719Z] 72896.00 IOPS, 284.75 MiB/s [2024-11-25T10:24:39.091Z] 71920.00 IOPS, 280.94 MiB/s 00:19:31.979 Latency(us) 00:19:31.979 [2024-11-25T10:24:39.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.979 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:31.979 xnvme_bdev : 5.00 71304.45 278.53 0.00 0.00 894.65 437.56 2566.17 00:19:31.979 [2024-11-25T10:24:39.091Z] =================================================================================================================== 00:19:31.979 [2024-11-25T10:24:39.091Z] Total : 71304.45 278.53 0.00 0.00 894.65 437.56 2566.17 00:19:32.921 10:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:32.921 10:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:32.921 10:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:32.921 10:24:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:32.921 10:24:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:32.921 { 00:19:32.921 "subsystems": [ 00:19:32.921 { 00:19:32.921 "subsystem": "bdev", 00:19:32.921 "config": [ 00:19:32.921 { 00:19:32.921 "params": { 00:19:32.921 "io_mechanism": "io_uring_cmd", 00:19:32.921 "conserve_cpu": false, 00:19:32.921 "filename": "/dev/ng0n1", 00:19:32.921 "name": "xnvme_bdev" 00:19:32.921 }, 00:19:32.921 "method": "bdev_xnvme_create" 00:19:32.921 }, 00:19:32.921 { 00:19:32.921 "method": "bdev_wait_for_examine" 00:19:32.921 } 00:19:32.921 ] 00:19:32.921 } 00:19:32.921 ] 00:19:32.921 } 00:19:32.921 [2024-11-25 10:24:39.907240] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:19:32.921 [2024-11-25 10:24:39.907376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72549 ] 00:19:33.179 [2024-11-25 10:24:40.089882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.179 [2024-11-25 10:24:40.210839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.746 Running I/O for 5 seconds... 
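Neither unmap nor write_zeroes carries a data payload; on an NVMe backend they typically land as Deallocate and Write Zeroes commands, which is why the unmap run above posts roughly twice the IOPS of the 4 KiB reads and writes. The MiB/s column for these workloads is therefore nominal, just IOPS scaled by the configured IO size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 71304.45 * 4096 / (1024 * 1024) }'
    # prints: 278.53 MiB/s
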
00:19:35.613 20806.00 IOPS, 81.27 MiB/s [2024-11-25T10:24:43.656Z] 35043.50 IOPS, 136.89 MiB/s [2024-11-25T10:24:44.628Z] 37519.33 IOPS, 146.56 MiB/s [2024-11-25T10:24:45.560Z] 40850.50 IOPS, 159.57 MiB/s [2024-11-25T10:24:45.560Z] 42271.00 IOPS, 165.12 MiB/s 00:19:38.448 Latency(us) 00:19:38.448 [2024-11-25T10:24:45.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.448 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:38.448 xnvme_bdev : 5.00 42254.20 165.06 0.00 0.00 1510.33 81.02 20845.19 00:19:38.448 [2024-11-25T10:24:45.560Z] =================================================================================================================== 00:19:38.448 [2024-11-25T10:24:45.560Z] Total : 42254.20 165.06 0.00 0.00 1510.33 81.02 20845.19 00:19:39.824 00:19:39.824 real 0m27.428s 00:19:39.824 user 0m13.876s 00:19:39.825 sys 0m13.160s 00:19:39.825 10:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.825 10:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:39.825 ************************************ 00:19:39.825 END TEST xnvme_bdevperf 00:19:39.825 ************************************ 00:19:39.825 10:24:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:39.825 10:24:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:39.825 10:24:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.825 10:24:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:39.825 ************************************ 00:19:39.825 START TEST xnvme_fio_plugin 00:19:39.825 ************************************ 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:39.825 10:24:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:39.825 { 00:19:39.825 "subsystems": [ 00:19:39.825 { 00:19:39.825 "subsystem": "bdev", 00:19:39.825 "config": [ 00:19:39.825 { 00:19:39.825 "params": { 00:19:39.825 "io_mechanism": "io_uring_cmd", 00:19:39.825 "conserve_cpu": false, 00:19:39.825 "filename": "/dev/ng0n1", 00:19:39.825 "name": "xnvme_bdev" 00:19:39.825 }, 00:19:39.825 "method": "bdev_xnvme_create" 00:19:39.825 }, 00:19:39.825 { 00:19:39.825 "method": "bdev_wait_for_examine" 00:19:39.825 } 00:19:39.825 ] 00:19:39.825 } 00:19:39.825 ] 00:19:39.825 } 00:19:40.084 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:40.084 fio-3.35 00:19:40.084 Starting 1 thread 00:19:46.680 00:19:46.680 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72671: Mon Nov 25 10:24:52 2024 00:19:46.680 read: IOPS=29.1k, BW=113MiB/s (119MB/s)(568MiB/5001msec) 00:19:46.680 slat (usec): min=2, max=145, avg= 6.37, stdev= 2.20 00:19:46.680 clat (usec): min=800, max=3601, avg=1951.97, stdev=283.79 00:19:46.680 lat (usec): min=802, max=3612, avg=1958.34, stdev=284.46 00:19:46.681 clat percentiles (usec): 00:19:46.681 | 1.00th=[ 1254], 5.00th=[ 1549], 10.00th=[ 1631], 20.00th=[ 1729], 00:19:46.681 | 30.00th=[ 1811], 40.00th=[ 1876], 50.00th=[ 1942], 60.00th=[ 1991], 00:19:46.681 | 70.00th=[ 2073], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2442], 00:19:46.681 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3163], 99.95th=[ 3261], 00:19:46.681 | 99.99th=[ 3490] 00:19:46.681 bw ( KiB/s): min=104960, max=129024, per=99.53%, avg=115655.11, stdev=8410.85, samples=9 00:19:46.681 iops : min=26240, max=32256, avg=28913.78, stdev=2102.71, samples=9 00:19:46.681 lat (usec) : 1000=0.40% 00:19:46.681 lat (msec) : 2=59.83%, 4=39.77% 00:19:46.681 cpu : usr=35.22%, sys=63.68%, ctx=10, majf=0, minf=762 00:19:46.681 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:46.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.681 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:19:46.681 issued rwts: total=145280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.681 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:46.681 00:19:46.681 Run status group 0 (all jobs): 00:19:46.681 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=568MiB (595MB), run=5001-5001msec 00:19:47.249 ----------------------------------------------------- 00:19:47.249 Suppressions used: 00:19:47.249 count bytes template 00:19:47.249 1 11 /usr/src/fio/parse.c 00:19:47.249 1 8 libtcmalloc_minimal.so 00:19:47.250 1 904 libcrypto.so 00:19:47.250 ----------------------------------------------------- 00:19:47.250 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:47.250 10:24:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:47.250 { 00:19:47.250 "subsystems": [ 00:19:47.250 { 00:19:47.250 "subsystem": "bdev", 00:19:47.250 "config": [ 00:19:47.250 { 00:19:47.250 "params": { 00:19:47.250 "io_mechanism": "io_uring_cmd", 00:19:47.250 "conserve_cpu": false, 00:19:47.250 "filename": "/dev/ng0n1", 00:19:47.250 "name": "xnvme_bdev" 00:19:47.250 }, 00:19:47.250 "method": "bdev_xnvme_create" 00:19:47.250 }, 00:19:47.250 { 00:19:47.250 "method": "bdev_wait_for_examine" 00:19:47.250 } 00:19:47.250 ] 00:19:47.250 } 00:19:47.250 ] 00:19:47.250 } 00:19:47.508 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:47.508 fio-3.35 00:19:47.508 Starting 1 thread 00:19:54.072 00:19:54.072 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72766: Mon Nov 25 10:25:00 2024 00:19:54.072 write: IOPS=29.6k, BW=116MiB/s (121MB/s)(578MiB/5002msec); 0 zone resets 00:19:54.072 slat (nsec): min=2491, max=64963, avg=6229.70, stdev=2306.23 00:19:54.072 clat (usec): min=875, max=4614, avg=1917.54, stdev=297.34 00:19:54.072 lat (usec): min=878, max=4622, avg=1923.77, stdev=298.21 00:19:54.072 clat percentiles (usec): 00:19:54.072 | 1.00th=[ 1336], 5.00th=[ 1516], 10.00th=[ 1582], 20.00th=[ 1663], 00:19:54.072 | 30.00th=[ 1745], 40.00th=[ 1811], 50.00th=[ 1876], 60.00th=[ 1958], 00:19:54.072 | 70.00th=[ 2040], 80.00th=[ 2147], 90.00th=[ 2311], 95.00th=[ 2442], 00:19:54.072 | 99.00th=[ 2704], 99.50th=[ 2835], 99.90th=[ 3195], 99.95th=[ 3359], 00:19:54.072 | 99.99th=[ 4555] 00:19:54.072 bw ( KiB/s): min=103936, max=136192, per=100.00%, avg=118556.44, stdev=9694.63, samples=9 00:19:54.072 iops : min=25984, max=34048, avg=29639.11, stdev=2423.66, samples=9 00:19:54.072 lat (usec) : 1000=0.15% 00:19:54.072 lat (msec) : 2=64.94%, 4=34.87%, 10=0.04% 00:19:54.072 cpu : usr=34.95%, sys=63.95%, ctx=10, majf=0, minf=762 00:19:54.072 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:54.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.072 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:54.072 issued rwts: total=0,147968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.072 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:54.072 00:19:54.072 Run status group 0 (all jobs): 00:19:54.072 WRITE: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=578MiB (606MB), run=5002-5002msec 00:19:54.640 ----------------------------------------------------- 00:19:54.640 Suppressions used: 00:19:54.640 count bytes template 00:19:54.640 1 11 /usr/src/fio/parse.c 00:19:54.640 1 8 libtcmalloc_minimal.so 00:19:54.640 1 904 libcrypto.so 00:19:54.640 ----------------------------------------------------- 00:19:54.640 00:19:54.640 ************************************ 00:19:54.641 END TEST xnvme_fio_plugin 00:19:54.641 ************************************ 00:19:54.641 00:19:54.641 real 0m14.765s 00:19:54.641 user 0m7.258s 00:19:54.641 sys 0m7.129s 00:19:54.641 10:25:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.641 10:25:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:54.641 10:25:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:54.641 10:25:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:54.641 10:25:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:19:54.641 
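The fio_plugin runs above repeat one preload step worth distilling: fio itself is not built with ASan, so the harness locates the sanitizer runtime the SPDK fio plugin links against and preloads it ahead of the plugin. A condensed sketch of the traced logic follows (not a verbatim copy of autotest_common.sh; gen_conf is the helper from the bdevperf sketch above):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
  # ldd prints e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done

# Preload the sanitizer runtime first, then the plugin itself, as in the trace.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev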
10:25:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:54.641 10:25:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.641 10:25:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.641 10:25:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:54.641 ************************************ 00:19:54.641 START TEST xnvme_rpc 00:19:54.641 ************************************ 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:54.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72857 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72857 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72857 ']' 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.641 10:25:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.641 [2024-11-25 10:25:01.672793] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:19:54.641 [2024-11-25 10:25:01.672924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72857 ] 00:19:54.900 [2024-11-25 10:25:01.854947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.900 [2024-11-25 10:25:01.972477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.908 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.908 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:55.908 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:55.909 xnvme_bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.909 10:25:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72857 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72857 ']' 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72857 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72857 00:19:56.167 killing process with pid 72857 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72857' 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72857 00:19:56.167 10:25:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72857 00:19:58.701 00:19:58.701 real 0m3.876s 00:19:58.701 user 0m3.957s 00:19:58.701 sys 0m0.541s 00:19:58.701 10:25:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.701 ************************************ 00:19:58.701 END TEST xnvme_rpc 00:19:58.701 ************************************ 00:19:58.701 10:25:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:58.701 10:25:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:58.701 10:25:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.701 10:25:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.701 10:25:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:58.701 ************************************ 00:19:58.701 START TEST xnvme_bdevperf 00:19:58.701 ************************************ 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:58.701 10:25:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:58.701 { 00:19:58.701 "subsystems": [ 00:19:58.701 { 00:19:58.701 "subsystem": "bdev", 00:19:58.701 "config": [ 00:19:58.701 { 00:19:58.701 "params": { 00:19:58.701 "io_mechanism": "io_uring_cmd", 00:19:58.701 "conserve_cpu": true, 00:19:58.701 "filename": "/dev/ng0n1", 00:19:58.701 "name": "xnvme_bdev" 00:19:58.701 }, 00:19:58.701 "method": "bdev_xnvme_create" 00:19:58.701 }, 00:19:58.701 { 00:19:58.701 "method": "bdev_wait_for_examine" 00:19:58.701 } 00:19:58.701 ] 00:19:58.701 } 00:19:58.701 ] 00:19:58.701 } 00:19:58.701 [2024-11-25 10:25:05.601891] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:19:58.701 [2024-11-25 10:25:05.602376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72935 ] 00:19:58.701 [2024-11-25 10:25:05.777983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.963 [2024-11-25 10:25:05.898030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.222 Running I/O for 5 seconds... 00:20:01.537 47744.00 IOPS, 186.50 MiB/s [2024-11-25T10:25:09.607Z] 41248.00 IOPS, 161.12 MiB/s [2024-11-25T10:25:10.542Z] 37696.00 IOPS, 147.25 MiB/s [2024-11-25T10:25:11.474Z] 35872.00 IOPS, 140.12 MiB/s 00:20:04.362 Latency(us) 00:20:04.362 [2024-11-25T10:25:11.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.362 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:04.362 xnvme_bdev : 5.00 35178.40 137.42 0.00 0.00 1813.97 769.85 6395.68 00:20:04.362 [2024-11-25T10:25:11.474Z] =================================================================================================================== 00:20:04.362 [2024-11-25T10:25:11.474Z] Total : 35178.40 137.42 0.00 0.00 1813.97 769.85 6395.68 00:20:05.296 10:25:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:05.296 10:25:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:05.296 10:25:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:05.296 10:25:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:05.296 10:25:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:05.296 { 00:20:05.296 "subsystems": [ 00:20:05.296 { 00:20:05.296 "subsystem": "bdev", 00:20:05.296 "config": [ 00:20:05.296 { 00:20:05.296 "params": { 00:20:05.296 "io_mechanism": "io_uring_cmd", 00:20:05.296 "conserve_cpu": true, 00:20:05.296 "filename": "/dev/ng0n1", 00:20:05.296 "name": "xnvme_bdev" 00:20:05.296 }, 00:20:05.297 "method": "bdev_xnvme_create" 00:20:05.297 }, 00:20:05.297 { 00:20:05.297 "method": "bdev_wait_for_examine" 00:20:05.297 } 00:20:05.297 ] 00:20:05.297 } 00:20:05.297 ] 00:20:05.297 } 00:20:05.554 [2024-11-25 10:25:12.449877] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:20:05.554 [2024-11-25 10:25:12.449993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73015 ] 00:20:05.554 [2024-11-25 10:25:12.630375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.812 [2024-11-25 10:25:12.748624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.070 Running I/O for 5 seconds... 00:20:08.385 30400.00 IOPS, 118.75 MiB/s [2024-11-25T10:25:16.433Z] 32480.00 IOPS, 126.88 MiB/s [2024-11-25T10:25:17.371Z] 34581.33 IOPS, 135.08 MiB/s [2024-11-25T10:25:18.307Z] 34464.00 IOPS, 134.62 MiB/s [2024-11-25T10:25:18.307Z] 33753.60 IOPS, 131.85 MiB/s 00:20:11.195 Latency(us) 00:20:11.195 [2024-11-25T10:25:18.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.195 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:11.195 xnvme_bdev : 5.01 33705.68 131.66 0.00 0.00 1892.98 802.75 7895.90 00:20:11.195 [2024-11-25T10:25:18.307Z] =================================================================================================================== 00:20:11.195 [2024-11-25T10:25:18.307Z] Total : 33705.68 131.66 0.00 0.00 1892.98 802.75 7895.90 00:20:12.132 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:12.132 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:12.132 10:25:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:12.132 10:25:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:12.132 10:25:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:12.391 { 00:20:12.391 "subsystems": [ 00:20:12.391 { 00:20:12.391 "subsystem": "bdev", 00:20:12.391 "config": [ 00:20:12.391 { 00:20:12.391 "params": { 00:20:12.391 "io_mechanism": "io_uring_cmd", 00:20:12.391 "conserve_cpu": true, 00:20:12.391 "filename": "/dev/ng0n1", 00:20:12.391 "name": "xnvme_bdev" 00:20:12.391 }, 00:20:12.391 "method": "bdev_xnvme_create" 00:20:12.391 }, 00:20:12.391 { 00:20:12.391 "method": "bdev_wait_for_examine" 00:20:12.391 } 00:20:12.391 ] 00:20:12.391 } 00:20:12.391 ] 00:20:12.391 } 00:20:12.391 [2024-11-25 10:25:19.306774] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:20:12.391 [2024-11-25 10:25:19.306896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:20:12.391 [2024-11-25 10:25:19.475708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.649 [2024-11-25 10:25:19.591780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.908 Running I/O for 5 seconds... 
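The xnvme_rpc test above checks every parameter of the created bdev by reading the saved config back out of the live target with framework_get_config and jq. A condensed sketch (the parametrized rpc_xnvme below is a condensation of the traced helper, and rpc_cmd stands in for the harness's RPC wrapper):

rpc_xnvme() {
  local param=$1
  # Dump the bdev subsystem config and pluck one param of the xnvme bdev.
  rpc_cmd framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.${param}"
}

[[ $(rpc_xnvme name) == xnvme_bdev ]]
[[ $(rpc_xnvme filename) == /dev/ng0n1 ]]
[[ $(rpc_xnvme io_mechanism) == io_uring_cmd ]]
[[ $(rpc_xnvme conserve_cpu) == true ]]   # -c was passed to bdev_xnvme_create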
00:20:15.260 69696.00 IOPS, 272.25 MiB/s [2024-11-25T10:25:22.938Z] 69440.00 IOPS, 271.25 MiB/s [2024-11-25T10:25:24.309Z] 69418.67 IOPS, 271.17 MiB/s [2024-11-25T10:25:25.243Z] 70512.00 IOPS, 275.44 MiB/s [2024-11-25T10:25:25.243Z] 70464.00 IOPS, 275.25 MiB/s 00:20:18.131 Latency(us) 00:20:18.131 [2024-11-25T10:25:25.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.131 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:18.131 xnvme_bdev : 5.00 70452.48 275.20 0.00 0.00 905.54 370.12 3013.60 00:20:18.131 [2024-11-25T10:25:25.243Z] =================================================================================================================== 00:20:18.131 [2024-11-25T10:25:25.243Z] Total : 70452.48 275.20 0.00 0.00 905.54 370.12 3013.60 00:20:19.067 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:19.067 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:19.067 10:25:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:19.067 10:25:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:19.067 10:25:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:19.067 { 00:20:19.067 "subsystems": [ 00:20:19.067 { 00:20:19.067 "subsystem": "bdev", 00:20:19.067 "config": [ 00:20:19.067 { 00:20:19.067 "params": { 00:20:19.067 "io_mechanism": "io_uring_cmd", 00:20:19.067 "conserve_cpu": true, 00:20:19.067 "filename": "/dev/ng0n1", 00:20:19.067 "name": "xnvme_bdev" 00:20:19.067 }, 00:20:19.067 "method": "bdev_xnvme_create" 00:20:19.067 }, 00:20:19.067 { 00:20:19.067 "method": "bdev_wait_for_examine" 00:20:19.067 } 00:20:19.067 ] 00:20:19.067 } 00:20:19.067 ] 00:20:19.067 } 00:20:19.067 [2024-11-25 10:25:26.159852] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:20:19.067 [2024-11-25 10:25:26.160307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73168 ] 00:20:19.325 [2024-11-25 10:25:26.342566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.583 [2024-11-25 10:25:26.468862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.841 Running I/O for 5 seconds... 
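Each spdk_tgt teardown in this run goes through the harness's killprocess helper; a condensed sketch of the logic visible in the traces above (the uname and sudo special cases are folded into a bail-out here, so this is a paraphrase rather than the literal autotest_common.sh source):

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 1                        # still running?
  # The SPDK target shows up as reactor_0; the harness special-cases
  # sudo wrappers, folded to a bail-out in this sketch.
  [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                       # reap and collect exit status
}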
00:20:22.141 53250.00 IOPS, 208.01 MiB/s [2024-11-25T10:25:30.185Z] 52550.00 IOPS, 205.27 MiB/s [2024-11-25T10:25:31.119Z] 51629.33 IOPS, 201.68 MiB/s [2024-11-25T10:25:32.050Z] 52343.00 IOPS, 204.46 MiB/s [2024-11-25T10:25:32.050Z] 50808.20 IOPS, 198.47 MiB/s 00:20:24.938 Latency(us) 00:20:24.938 [2024-11-25T10:25:32.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.938 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:24.938 xnvme_bdev : 5.00 50772.99 198.33 0.00 0.00 1254.62 66.21 16107.64 00:20:24.938 [2024-11-25T10:25:32.050Z] =================================================================================================================== 00:20:24.938 [2024-11-25T10:25:32.050Z] Total : 50772.99 198.33 0.00 0.00 1254.62 66.21 16107.64 00:20:25.869 ************************************ 00:20:25.869 END TEST xnvme_bdevperf 00:20:25.869 ************************************ 00:20:25.869 00:20:25.869 real 0m27.441s 00:20:25.869 user 0m16.788s 00:20:25.869 sys 0m8.677s 00:20:25.869 10:25:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.869 10:25:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 10:25:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:26.126 10:25:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:26.126 10:25:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.126 10:25:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 ************************************ 00:20:26.126 START TEST xnvme_fio_plugin 00:20:26.126 ************************************ 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:26.126 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:26.127 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:26.127 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:26.127 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:26.127 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:26.127 10:25:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:26.127 { 00:20:26.127 "subsystems": [ 00:20:26.127 { 00:20:26.127 "subsystem": "bdev", 00:20:26.127 "config": [ 00:20:26.127 { 00:20:26.127 "params": { 00:20:26.127 "io_mechanism": "io_uring_cmd", 00:20:26.127 "conserve_cpu": true, 00:20:26.127 "filename": "/dev/ng0n1", 00:20:26.127 "name": "xnvme_bdev" 00:20:26.127 }, 00:20:26.127 "method": "bdev_xnvme_create" 00:20:26.127 }, 00:20:26.127 { 00:20:26.127 "method": "bdev_wait_for_examine" 00:20:26.127 } 00:20:26.127 ] 00:20:26.127 } 00:20:26.127 ] 00:20:26.127 } 00:20:26.383 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:26.383 fio-3.35 00:20:26.384 Starting 1 thread 00:20:32.945 00:20:32.945 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73292: Mon Nov 25 10:25:39 2024 00:20:32.945 read: IOPS=37.9k, BW=148MiB/s (155MB/s)(741MiB/5001msec) 00:20:32.945 slat (nsec): min=2235, max=70356, avg=4619.31, stdev=2363.07 00:20:32.945 clat (usec): min=709, max=3303, avg=1500.78, stdev=409.08 00:20:32.945 lat (usec): min=712, max=3339, avg=1505.40, stdev=410.59 00:20:32.945 clat percentiles (usec): 00:20:32.945 | 1.00th=[ 832], 5.00th=[ 938], 10.00th=[ 1029], 20.00th=[ 1156], 00:20:32.945 | 30.00th=[ 1254], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1516], 00:20:32.945 | 70.00th=[ 1647], 80.00th=[ 1876], 90.00th=[ 2114], 95.00th=[ 2278], 00:20:32.945 | 99.00th=[ 2540], 99.50th=[ 2606], 99.90th=[ 2835], 99.95th=[ 2933], 00:20:32.945 | 99.99th=[ 3130] 00:20:32.945 bw ( KiB/s): min=112640, max=183296, per=95.52%, avg=144979.78, stdev=24298.48, samples=9 00:20:32.945 iops : min=28160, max=45824, avg=36244.89, stdev=6074.65, samples=9 00:20:32.945 lat (usec) : 750=0.04%, 1000=8.49% 00:20:32.945 lat (msec) : 2=76.82%, 4=14.65% 00:20:32.945 cpu : usr=51.26%, sys=46.04%, ctx=13, majf=0, minf=762 00:20:32.945 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:32.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.945 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:20:32.945 issued rwts: total=189760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.945 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:32.945 00:20:32.945 Run status group 0 (all jobs): 00:20:32.945 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=741MiB (777MB), run=5001-5001msec 00:20:33.515 ----------------------------------------------------- 00:20:33.515 Suppressions used: 00:20:33.515 count bytes template 00:20:33.515 1 11 /usr/src/fio/parse.c 00:20:33.515 1 8 libtcmalloc_minimal.so 00:20:33.515 1 904 libcrypto.so 00:20:33.515 ----------------------------------------------------- 00:20:33.515 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:33.515 10:25:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:33.515 { 00:20:33.515 "subsystems": [ 00:20:33.515 { 00:20:33.515 "subsystem": "bdev", 00:20:33.515 "config": [ 00:20:33.515 { 00:20:33.515 "params": { 00:20:33.515 "io_mechanism": "io_uring_cmd", 00:20:33.515 "conserve_cpu": true, 00:20:33.515 "filename": "/dev/ng0n1", 00:20:33.515 "name": "xnvme_bdev" 00:20:33.515 }, 00:20:33.515 "method": "bdev_xnvme_create" 00:20:33.515 }, 00:20:33.515 { 00:20:33.515 "method": "bdev_wait_for_examine" 00:20:33.515 } 00:20:33.515 ] 00:20:33.515 } 00:20:33.515 ] 00:20:33.515 } 00:20:33.773 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:33.773 fio-3.35 00:20:33.773 Starting 1 thread 00:20:40.341 00:20:40.341 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73389: Mon Nov 25 10:25:46 2024 00:20:40.341 write: IOPS=31.7k, BW=124MiB/s (130MB/s)(619MiB/5001msec); 0 zone resets 00:20:40.341 slat (usec): min=2, max=656, avg= 6.26, stdev= 7.71 00:20:40.341 clat (usec): min=51, max=28227, avg=1789.84, stdev=855.07 00:20:40.341 lat (usec): min=55, max=28232, avg=1796.10, stdev=856.04 00:20:40.341 clat percentiles (usec): 00:20:40.341 | 1.00th=[ 273], 5.00th=[ 914], 10.00th=[ 1020], 20.00th=[ 1188], 00:20:40.341 | 30.00th=[ 1418], 40.00th=[ 1582], 50.00th=[ 1745], 60.00th=[ 1926], 00:20:40.341 | 70.00th=[ 2114], 80.00th=[ 2278], 90.00th=[ 2507], 95.00th=[ 2704], 00:20:40.341 | 99.00th=[ 3916], 99.50th=[ 4555], 99.90th=[ 8356], 99.95th=[ 9634], 00:20:40.341 | 99.99th=[27919] 00:20:40.341 bw ( KiB/s): min=96256, max=152000, per=99.42%, avg=126056.89, stdev=21969.99, samples=9 00:20:40.341 iops : min=24064, max=38000, avg=31514.22, stdev=5492.50, samples=9 00:20:40.341 lat (usec) : 100=0.09%, 250=0.79%, 500=1.14%, 750=0.93%, 1000=6.00% 00:20:40.341 lat (msec) : 2=54.74%, 4=35.38%, 10=0.89%, 20=0.01%, 50=0.04% 00:20:40.341 cpu : usr=51.28%, sys=42.74%, ctx=26, majf=0, minf=762 00:20:40.341 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.2%, 16=23.0%, 32=53.8%, >=64=2.5% 00:20:40.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.341 complete : 0=0.0%, 4=98.1%, 8=0.2%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0% 00:20:40.341 issued rwts: total=0,158518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:40.341 00:20:40.341 Run status group 0 (all jobs): 00:20:40.341 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=619MiB (649MB), run=5001-5001msec 00:20:40.906 ----------------------------------------------------- 00:20:40.906 Suppressions used: 00:20:40.906 count bytes template 00:20:40.906 1 11 /usr/src/fio/parse.c 00:20:40.906 1 8 libtcmalloc_minimal.so 00:20:40.906 1 904 libcrypto.so 00:20:40.906 ----------------------------------------------------- 00:20:40.906 00:20:40.906 00:20:40.906 real 0m14.755s 00:20:40.906 user 0m8.892s 00:20:40.906 sys 0m5.167s 00:20:40.906 10:25:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.906 10:25:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:40.906 ************************************ 00:20:40.906 END TEST xnvme_fio_plugin 00:20:40.906 ************************************ 00:20:40.906 10:25:47 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72857 00:20:40.906 10:25:47 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72857 ']' 00:20:40.906 10:25:47 nvme_xnvme -- common/autotest_common.sh@958 -- 
# kill -0 72857 00:20:40.906 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72857) - No such process 00:20:40.906 Process with pid 72857 is not found 00:20:40.906 10:25:47 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72857 is not found' 00:20:40.906 00:20:40.906 real 3m51.089s 00:20:40.906 user 2m5.579s 00:20:40.906 sys 1m29.274s 00:20:40.906 10:25:47 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.906 ************************************ 00:20:40.906 END TEST nvme_xnvme 00:20:40.906 ************************************ 00:20:40.906 10:25:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:40.906 10:25:47 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:40.906 10:25:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.906 10:25:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.906 10:25:47 -- common/autotest_common.sh@10 -- # set +x 00:20:40.906 ************************************ 00:20:40.906 START TEST blockdev_xnvme 00:20:40.906 ************************************ 00:20:40.906 10:25:47 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:41.165 * Looking for test storage... 00:20:41.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:41.165 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.165 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.165 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.166 10:25:48 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.166 --rc genhtml_branch_coverage=1 00:20:41.166 --rc genhtml_function_coverage=1 00:20:41.166 --rc genhtml_legend=1 00:20:41.166 --rc geninfo_all_blocks=1 00:20:41.166 --rc geninfo_unexecuted_blocks=1 00:20:41.166 00:20:41.166 ' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.166 --rc genhtml_branch_coverage=1 00:20:41.166 --rc genhtml_function_coverage=1 00:20:41.166 --rc genhtml_legend=1 00:20:41.166 --rc geninfo_all_blocks=1 00:20:41.166 --rc geninfo_unexecuted_blocks=1 00:20:41.166 00:20:41.166 ' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.166 --rc genhtml_branch_coverage=1 00:20:41.166 --rc genhtml_function_coverage=1 00:20:41.166 --rc genhtml_legend=1 00:20:41.166 --rc geninfo_all_blocks=1 00:20:41.166 --rc geninfo_unexecuted_blocks=1 00:20:41.166 00:20:41.166 ' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.166 --rc genhtml_branch_coverage=1 00:20:41.166 --rc genhtml_function_coverage=1 00:20:41.166 --rc genhtml_legend=1 00:20:41.166 --rc geninfo_all_blocks=1 00:20:41.166 --rc geninfo_unexecuted_blocks=1 00:20:41.166 00:20:41.166 ' 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73528 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:41.166 10:25:48 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73528 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73528 ']' 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.166 10:25:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:41.166 [2024-11-25 10:25:48.268729] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:20:41.166 [2024-11-25 10:25:48.269111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73528 ] 00:20:41.425 [2024-11-25 10:25:48.452273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.683 [2024-11-25 10:25:48.579953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.620 10:25:49 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.620 10:25:49 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:20:42.620 10:25:49 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:42.620 10:25:49 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:20:42.620 10:25:49 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:42.620 10:25:49 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:42.620 10:25:49 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:43.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.753 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:43.753 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:43.753 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:43.753 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:43.753 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:20:43.753 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:44.024 nvme0n1 00:20:44.024 nvme0n2 00:20:44.024 nvme0n3 00:20:44.024 nvme1n1 00:20:44.024 nvme2n1 00:20:44.024 nvme3n1 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:50 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:44.024 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:44.024 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:44.024 10:25:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.024 10:25:51 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.024 10:25:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.024 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:44.024 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:44.025 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0098008f-8a4f-4a32-a396-5bdd4ba98292"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0098008f-8a4f-4a32-a396-5bdd4ba98292",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dcb773c5-107f-47a7-bd1e-7a13ed71bd58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dcb773c5-107f-47a7-bd1e-7a13ed71bd58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6bcd9e1f-eedf-4818-ae9c-e935c3a41ff6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6bcd9e1f-eedf-4818-ae9c-e935c3a41ff6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "64ffe7ae-c484-490d-96ab-c6b48b495fab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "64ffe7ae-c484-490d-96ab-c6b48b495fab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e4347bc3-3c43-4e41-b6b3-08f7032d0874"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e4347bc3-3c43-4e41-b6b3-08f7032d0874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5530eab5-5eaf-4462-a404-fab1b7a3a0e6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5530eab5-5eaf-4462-a404-fab1b7a3a0e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:44.025 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:44.025 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:20:44.025 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:44.025 10:25:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73528 00:20:44.025 10:25:51 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73528 ']' 00:20:44.025 10:25:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73528 00:20:44.025 10:25:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73528 00:20:44.293 killing process with pid 73528 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73528' 00:20:44.293 10:25:51 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73528 00:20:44.293 
10:25:51 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73528 00:20:46.825 10:25:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:46.825 10:25:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:46.825 10:25:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:46.825 10:25:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.825 10:25:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:46.825 ************************************ 00:20:46.825 START TEST bdev_hello_world 00:20:46.825 ************************************ 00:20:46.825 10:25:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:46.825 [2024-11-25 10:25:53.710551] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:20:46.825 [2024-11-25 10:25:53.710679] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73819 ] 00:20:46.826 [2024-11-25 10:25:53.892093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.084 [2024-11-25 10:25:54.013358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.651 [2024-11-25 10:25:54.472796] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:47.651 [2024-11-25 10:25:54.473044] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:47.651 [2024-11-25 10:25:54.473080] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:47.651 [2024-11-25 10:25:54.475412] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:47.651 [2024-11-25 10:25:54.475772] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:47.651 [2024-11-25 10:25:54.475796] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:47.651 [2024-11-25 10:25:54.476002] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
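The trace above walks /dev/nvme*n*, checks each namespace's /sys/block/<dev>/queue/zoned entry, queues one bdev_xnvme_create RPC per plain (non-zoned) block node, and then hello_bdev opens nvme0n1 and round-trips the "Hello World!" string. A condensed sketch of that flow, assuming an SPDK checkout as the working directory (not the test's exact code, which lives in bdev/blockdev.sh and common/autotest_common.sh):

    # Build one xNVMe bdev per NVMe namespace node, io_uring backend,
    # carrying the -c flag exactly as the generated RPC lines above do.
    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                  # skip anything that is not a block node
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    printf '%s\n' "${nvmes[@]}"                     # the six creation commands fed to rpc_cmd
    # hello_bdev then targets one of the resulting bdevs:
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1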
00:20:47.651 00:20:47.651 [2024-11-25 10:25:54.476027] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:48.587 00:20:48.587 real 0m1.967s 00:20:48.587 user 0m1.595s 00:20:48.587 sys 0m0.254s 00:20:48.587 ************************************ 00:20:48.587 END TEST bdev_hello_world 00:20:48.587 ************************************ 00:20:48.587 10:25:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.587 10:25:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 10:25:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:48.587 10:25:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.587 10:25:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.587 10:25:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:48.587 ************************************ 00:20:48.587 START TEST bdev_bounds 00:20:48.587 ************************************ 00:20:48.587 Process bdevio pid: 73861 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73861 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73861' 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73861 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73861 ']' 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.587 10:25:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:48.845 [2024-11-25 10:25:55.745197] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
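bdev_bounds drives the bdevio app: it is started with -w (which, per this trace, makes it initialize and then wait until tests.py triggers the run) and -s 0 against the same bdev.json, the harness waits for /var/tmp/spdk.sock, and tests.py fires the CUnit suites whose EAL startup lines follow. A minimal sketch of that sequence, again assuming an SPDK checkout as the working directory and with a simplified stand-in for the suite's waitforlisten:

    # Start bdevio in wait mode, then trigger the suites over the default RPC socket.
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    while [[ ! -S /var/tmp/spdk.sock ]]; do         # stand-in for waitforlisten
        sleep 0.1
    done
    ./test/bdev/bdevio/tests.py perform_tests       # runs every "bdevio tests on: ..." suite
    kill "$bdevio_pid" && wait "$bdevio_pid"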
00:20:48.845 [2024-11-25 10:25:55.745537] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73861 ] 00:20:48.845 [2024-11-25 10:25:55.927654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:49.102 [2024-11-25 10:25:56.055855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.102 [2024-11-25 10:25:56.055910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.102 [2024-11-25 10:25:56.055944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.673 10:25:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.673 10:25:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:49.673 10:25:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:49.673 I/O targets: 00:20:49.673 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:49.673 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:49.673 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:49.673 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:49.673 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:49.673 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:49.673 00:20:49.673 00:20:49.673 CUnit - A unit testing framework for C - Version 2.1-3 00:20:49.673 http://cunit.sourceforge.net/ 00:20:49.673 00:20:49.673 00:20:49.673 Suite: bdevio tests on: nvme3n1 00:20:49.673 Test: blockdev write read block ...passed 00:20:49.673 Test: blockdev write zeroes read block ...passed 00:20:49.673 Test: blockdev write zeroes read no split ...passed 00:20:49.673 Test: blockdev write zeroes read split ...passed 00:20:49.673 Test: blockdev write zeroes read split partial ...passed 00:20:49.673 Test: blockdev reset ...passed 00:20:49.673 Test: blockdev write read 8 blocks ...passed 00:20:49.673 Test: blockdev write read size > 128k ...passed 00:20:49.673 Test: blockdev write read invalid size ...passed 00:20:49.673 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:49.673 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:49.673 Test: blockdev write read max offset ...passed 00:20:49.673 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:49.673 Test: blockdev writev readv 8 blocks ...passed 00:20:49.673 Test: blockdev writev readv 30 x 1block ...passed 00:20:49.673 Test: blockdev writev readv block ...passed 00:20:49.673 Test: blockdev writev readv size > 128k ...passed 00:20:49.673 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:49.673 Test: blockdev comparev and writev ...passed 00:20:49.673 Test: blockdev nvme passthru rw ...passed 00:20:49.673 Test: blockdev nvme passthru vendor specific ...passed 00:20:49.673 Test: blockdev nvme admin passthru ...passed 00:20:49.673 Test: blockdev copy ...passed 00:20:49.673 Suite: bdevio tests on: nvme2n1 00:20:49.673 Test: blockdev write read block ...passed 00:20:49.673 Test: blockdev write zeroes read block ...passed 00:20:49.673 Test: blockdev write zeroes read no split ...passed 00:20:49.932 Test: blockdev write zeroes read split ...passed 00:20:49.932 Test: blockdev write zeroes read split partial ...passed 00:20:49.932 Test: blockdev reset ...passed 
00:20:49.932 Test: blockdev write read 8 blocks ...passed 00:20:49.932 Test: blockdev write read size > 128k ...passed 00:20:49.932 Test: blockdev write read invalid size ...passed 00:20:49.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:49.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:49.932 Test: blockdev write read max offset ...passed 00:20:49.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:49.932 Test: blockdev writev readv 8 blocks ...passed 00:20:49.932 Test: blockdev writev readv 30 x 1block ...passed 00:20:49.932 Test: blockdev writev readv block ...passed 00:20:49.932 Test: blockdev writev readv size > 128k ...passed 00:20:49.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:49.932 Test: blockdev comparev and writev ...passed 00:20:49.932 Test: blockdev nvme passthru rw ...passed 00:20:49.932 Test: blockdev nvme passthru vendor specific ...passed 00:20:49.932 Test: blockdev nvme admin passthru ...passed 00:20:49.932 Test: blockdev copy ...passed 00:20:49.932 Suite: bdevio tests on: nvme1n1 00:20:49.932 Test: blockdev write read block ...passed 00:20:49.932 Test: blockdev write zeroes read block ...passed 00:20:49.932 Test: blockdev write zeroes read no split ...passed 00:20:49.932 Test: blockdev write zeroes read split ...passed 00:20:49.932 Test: blockdev write zeroes read split partial ...passed 00:20:49.932 Test: blockdev reset ...passed 00:20:49.932 Test: blockdev write read 8 blocks ...passed 00:20:49.932 Test: blockdev write read size > 128k ...passed 00:20:49.932 Test: blockdev write read invalid size ...passed 00:20:49.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:49.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:49.932 Test: blockdev write read max offset ...passed 00:20:49.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:49.932 Test: blockdev writev readv 8 blocks ...passed 00:20:49.932 Test: blockdev writev readv 30 x 1block ...passed 00:20:49.932 Test: blockdev writev readv block ...passed 00:20:49.932 Test: blockdev writev readv size > 128k ...passed 00:20:49.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:49.932 Test: blockdev comparev and writev ...passed 00:20:49.932 Test: blockdev nvme passthru rw ...passed 00:20:49.932 Test: blockdev nvme passthru vendor specific ...passed 00:20:49.932 Test: blockdev nvme admin passthru ...passed 00:20:49.932 Test: blockdev copy ...passed 00:20:49.932 Suite: bdevio tests on: nvme0n3 00:20:49.932 Test: blockdev write read block ...passed 00:20:49.932 Test: blockdev write zeroes read block ...passed 00:20:49.932 Test: blockdev write zeroes read no split ...passed 00:20:49.932 Test: blockdev write zeroes read split ...passed 00:20:49.932 Test: blockdev write zeroes read split partial ...passed 00:20:49.932 Test: blockdev reset ...passed 00:20:49.932 Test: blockdev write read 8 blocks ...passed 00:20:49.932 Test: blockdev write read size > 128k ...passed 00:20:49.932 Test: blockdev write read invalid size ...passed 00:20:49.932 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:49.932 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:49.932 Test: blockdev write read max offset ...passed 00:20:49.932 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:49.932 Test: blockdev writev readv 8 blocks 
...passed 00:20:49.932 Test: blockdev writev readv 30 x 1block ...passed 00:20:49.932 Test: blockdev writev readv block ...passed 00:20:49.932 Test: blockdev writev readv size > 128k ...passed 00:20:49.932 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:49.932 Test: blockdev comparev and writev ...passed 00:20:49.932 Test: blockdev nvme passthru rw ...passed 00:20:49.932 Test: blockdev nvme passthru vendor specific ...passed 00:20:49.932 Test: blockdev nvme admin passthru ...passed 00:20:49.932 Test: blockdev copy ...passed 00:20:49.932 Suite: bdevio tests on: nvme0n2 00:20:49.932 Test: blockdev write read block ...passed 00:20:49.932 Test: blockdev write zeroes read block ...passed 00:20:50.190 Test: blockdev write zeroes read no split ...passed 00:20:50.190 Test: blockdev write zeroes read split ...passed 00:20:50.190 Test: blockdev write zeroes read split partial ...passed 00:20:50.190 Test: blockdev reset ...passed 00:20:50.190 Test: blockdev write read 8 blocks ...passed 00:20:50.190 Test: blockdev write read size > 128k ...passed 00:20:50.190 Test: blockdev write read invalid size ...passed 00:20:50.190 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:50.190 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:50.190 Test: blockdev write read max offset ...passed 00:20:50.190 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:50.190 Test: blockdev writev readv 8 blocks ...passed 00:20:50.190 Test: blockdev writev readv 30 x 1block ...passed 00:20:50.190 Test: blockdev writev readv block ...passed 00:20:50.190 Test: blockdev writev readv size > 128k ...passed 00:20:50.190 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:50.190 Test: blockdev comparev and writev ...passed 00:20:50.190 Test: blockdev nvme passthru rw ...passed 00:20:50.190 Test: blockdev nvme passthru vendor specific ...passed 00:20:50.190 Test: blockdev nvme admin passthru ...passed 00:20:50.190 Test: blockdev copy ...passed 00:20:50.190 Suite: bdevio tests on: nvme0n1 00:20:50.190 Test: blockdev write read block ...passed 00:20:50.190 Test: blockdev write zeroes read block ...passed 00:20:50.190 Test: blockdev write zeroes read no split ...passed 00:20:50.190 Test: blockdev write zeroes read split ...passed 00:20:50.190 Test: blockdev write zeroes read split partial ...passed 00:20:50.190 Test: blockdev reset ...passed 00:20:50.190 Test: blockdev write read 8 blocks ...passed 00:20:50.190 Test: blockdev write read size > 128k ...passed 00:20:50.190 Test: blockdev write read invalid size ...passed 00:20:50.190 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:50.190 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:50.190 Test: blockdev write read max offset ...passed 00:20:50.190 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:50.190 Test: blockdev writev readv 8 blocks ...passed 00:20:50.190 Test: blockdev writev readv 30 x 1block ...passed 00:20:50.190 Test: blockdev writev readv block ...passed 00:20:50.190 Test: blockdev writev readv size > 128k ...passed 00:20:50.190 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:50.190 Test: blockdev comparev and writev ...passed 00:20:50.190 Test: blockdev nvme passthru rw ...passed 00:20:50.190 Test: blockdev nvme passthru vendor specific ...passed 00:20:50.190 Test: blockdev nvme admin passthru ...passed 00:20:50.190 Test: blockdev copy ...passed 
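All six suites run the same 23-test sequence (6 x 23 = 138 tests in the summary that follows), so the Failed column of the CUnit run summary is the signal worth scripting against when post-processing a saved copy of this output. A hypothetical sketch; bdevio.log is a stand-in file name, and the field position assumes the summary layout printed below:

    # Exit non-zero if any summary row (suites/tests/asserts) reports failures;
    # field 5 is the Failed column in the run-summary table.
    awk '$1 ~ /^(suites|tests|asserts)$/ && $5 != 0 { bad = 1 } END { exit bad }' bdevio.log &&
        echo "bdevio: clean run"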
00:20:50.190 00:20:50.190 Run Summary: Type Total Ran Passed Failed Inactive 00:20:50.190 suites 6 6 n/a 0 0 00:20:50.190 tests 138 138 138 0 0 00:20:50.190 asserts 780 780 780 0 n/a 00:20:50.190 00:20:50.190 Elapsed time = 1.410 seconds 00:20:50.190 0 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73861 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73861 ']' 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73861 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73861 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73861' 00:20:50.190 killing process with pid 73861 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73861 00:20:50.190 10:25:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73861 00:20:51.564 10:25:58 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:51.564 00:20:51.564 real 0m2.747s 00:20:51.564 user 0m6.737s 00:20:51.564 sys 0m0.445s 00:20:51.564 10:25:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.564 10:25:58 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:51.564 ************************************ 00:20:51.564 END TEST bdev_bounds 00:20:51.564 ************************************ 00:20:51.564 10:25:58 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:51.564 10:25:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:51.564 10:25:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.564 10:25:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.564 ************************************ 00:20:51.564 START TEST bdev_nbd 00:20:51.564 ************************************ 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
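The bdev_nbd test (nbd_function_test) pairs the six bdevs with kernel NBD nodes: a bdev_svc app serves RPC on /var/tmp/spdk-nbd.sock, each bdev is exported with nbd_start_disk, verified, and torn down again with nbd_stop_disk. A sketch of the export half, assuming the nbd kernel module is loaded (the [[ -e /sys/module/nbd ]] check on the next line) and a bdev_svc already listening on that socket:

    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        # without an explicit /dev/nbdX argument, SPDK picks a free node and prints it
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "$b"
    done
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks   # JSON pairing of bdevs and nbd nodes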
00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73925 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73925 /var/tmp/spdk-nbd.sock 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73925 ']' 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:51.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.564 10:25:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:51.564 [2024-11-25 10:25:58.558557] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
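Each exported node is only trusted once waitfornbd succeeds: the helper polls /proc/partitions for the new device, then proves it serves I/O with a single 4 KiB O_DIRECT read (the dd ... bs=4096 count=1 iflag=direct lines and their "1+0 records" output below). A condensed sketch of that pattern; /tmp/nbdtest stands in for the repo's test/bdev/nbdtest scratch file, and the sleep is an assumption since the trace does not show its pacing:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do             # same 20-try bound as the trace
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                               # pacing assumed, not shown in the log
        done
        # one direct 4 KiB read must complete and actually move data
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]
    }
    waitfornbd nbd0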
00:20:51.564 [2024-11-25 10:25:58.558685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.823 [2024-11-25 10:25:58.739452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.823 [2024-11-25 10:25:58.867702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:52.390 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:52.647 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:52.647 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.648 
1+0 records in 00:20:52.648 1+0 records out 00:20:52.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665635 s, 6.2 MB/s 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:52.648 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.906 1+0 records in 00:20:52.906 1+0 records out 00:20:52.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654569 s, 6.3 MB/s 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:52.906 10:25:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:53.165 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:53.165 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:53.165 10:26:00 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.166 1+0 records in 00:20:53.166 1+0 records out 00:20:53.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621295 s, 6.6 MB/s 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:53.166 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.425 1+0 records in 00:20:53.425 1+0 records out 00:20:53.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923999 s, 4.4 MB/s 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:53.425 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.683 1+0 records in 00:20:53.683 1+0 records out 00:20:53.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076783 s, 5.3 MB/s 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:53.683 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:53.942 10:26:00 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:53.942 10:26:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.942 1+0 records in 00:20:53.942 1+0 records out 00:20:53.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628453 s, 6.5 MB/s 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:53.942 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:54.200 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd0", 00:20:54.200 "bdev_name": "nvme0n1" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd1", 00:20:54.200 "bdev_name": "nvme0n2" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd2", 00:20:54.200 "bdev_name": "nvme0n3" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd3", 00:20:54.200 "bdev_name": "nvme1n1" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd4", 00:20:54.200 "bdev_name": "nvme2n1" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd5", 00:20:54.200 "bdev_name": "nvme3n1" 00:20:54.200 } 00:20:54.200 ]' 00:20:54.200 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:54.200 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd0", 00:20:54.200 "bdev_name": "nvme0n1" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd1", 00:20:54.200 "bdev_name": "nvme0n2" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd2", 00:20:54.200 "bdev_name": "nvme0n3" 00:20:54.200 }, 00:20:54.200 { 00:20:54.200 "nbd_device": "/dev/nbd3", 00:20:54.201 "bdev_name": "nvme1n1" 00:20:54.201 }, 00:20:54.201 { 00:20:54.201 "nbd_device": "/dev/nbd4", 00:20:54.201 "bdev_name": "nvme2n1" 00:20:54.201 }, 00:20:54.201 { 00:20:54.201 "nbd_device": "/dev/nbd5", 00:20:54.201 "bdev_name": "nvme3n1" 00:20:54.201 } 00:20:54.201 ]' 00:20:54.201 10:26:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.201 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.459 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.719 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.977 10:26:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.234 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.492 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.492 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:55.751 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:56.010 10:26:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:56.010 /dev/nbd0 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.269 1+0 records in 00:20:56.269 1+0 records out 00:20:56.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630972 s, 6.5 MB/s 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:56.269 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:56.269 /dev/nbd1 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.528 1+0 records in 00:20:56.528 1+0 records out 00:20:56.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734262 s, 5.6 MB/s 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:56.528 10:26:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:56.528 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:56.528 /dev/nbd10 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.787 1+0 records in 00:20:56.787 1+0 records out 00:20:56.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571992 s, 7.2 MB/s 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:56.787 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:56.787 /dev/nbd11 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:57.046 10:26:03 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.046 1+0 records in 00:20:57.046 1+0 records out 00:20:57.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650359 s, 6.3 MB/s 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:57.046 10:26:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:57.046 /dev/nbd12 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.305 1+0 records in 00:20:57.305 1+0 records out 00:20:57.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805443 s, 5.1 MB/s 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:57.305 /dev/nbd13 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:57.305 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.564 1+0 records in 00:20:57.564 1+0 records out 00:20:57.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674956 s, 6.1 MB/s 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd0", 00:20:57.564 "bdev_name": "nvme0n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd1", 00:20:57.564 "bdev_name": "nvme0n2" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd10", 00:20:57.564 "bdev_name": "nvme0n3" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd11", 00:20:57.564 "bdev_name": "nvme1n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd12", 00:20:57.564 "bdev_name": "nvme2n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd13", 00:20:57.564 "bdev_name": "nvme3n1" 00:20:57.564 } 00:20:57.564 ]' 00:20:57.564 10:26:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:57.564 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd0", 00:20:57.564 "bdev_name": "nvme0n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd1", 00:20:57.564 "bdev_name": "nvme0n2" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd10", 00:20:57.564 "bdev_name": "nvme0n3" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd11", 00:20:57.564 "bdev_name": "nvme1n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd12", 00:20:57.564 "bdev_name": "nvme2n1" 00:20:57.564 }, 00:20:57.564 { 00:20:57.564 "nbd_device": "/dev/nbd13", 00:20:57.564 "bdev_name": "nvme3n1" 00:20:57.564 } 00:20:57.564 ]' 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:57.823 /dev/nbd1 00:20:57.823 /dev/nbd10 00:20:57.823 /dev/nbd11 00:20:57.823 /dev/nbd12 00:20:57.823 /dev/nbd13' 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:57.823 /dev/nbd1 00:20:57.823 /dev/nbd10 00:20:57.823 /dev/nbd11 00:20:57.823 /dev/nbd12 00:20:57.823 /dev/nbd13' 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:57.823 256+0 records in 00:20:57.823 256+0 records out 00:20:57.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722023 s, 145 MB/s 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:57.823 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:57.824 256+0 records in 00:20:57.824 256+0 records out 00:20:57.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116034 s, 9.0 MB/s 00:20:57.824 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:57.824 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:58.082 256+0 records in 00:20:58.082 256+0 records out 00:20:58.082 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.122491 s, 8.6 MB/s 00:20:58.082 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:58.082 10:26:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:58.082 256+0 records in 00:20:58.082 256+0 records out 00:20:58.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121664 s, 8.6 MB/s 00:20:58.082 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:58.082 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:58.355 256+0 records in 00:20:58.355 256+0 records out 00:20:58.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123359 s, 8.5 MB/s 00:20:58.355 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:58.355 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:58.355 256+0 records in 00:20:58.355 256+0 records out 00:20:58.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150651 s, 7.0 MB/s 00:20:58.355 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:58.355 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:58.649 256+0 records in 00:20:58.649 256+0 records out 00:20:58.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123298 s, 8.5 MB/s 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:58.649 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:58.650 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.650 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:58.650 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.650 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:58.908 10:26:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.908 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.166 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.425 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:59.683 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:59.683 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:59.683 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.684 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.942 10:26:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:59.942 10:26:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:00.201 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:00.459 malloc_lvol_verify 00:21:00.459 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:00.719 2051ca46-a1da-4adc-b85b-8d70cac852d0 00:21:00.719 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:00.719 7275bbd0-8403-411e-9ee8-71b4425889d8 00:21:00.978 10:26:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:00.978 /dev/nbd0 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
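The nbd_common.sh helpers traced throughout this section gate every attach and detach on the same polling idiom: after nbd_start_disk, wait until the device is listed in /proc/partitions, reports a non-zero /sys/block/<nbd>/size, and answers a direct-I/O read; after nbd_stop_disk, wait until it drops out of /proc/partitions again. Before the mkfs.ext4 output below, a minimal standalone sketch of that idiom (helper names, the 0.1 s sleep, and the probe file path are illustrative assumptions; the 20-try bound and the individual checks mirror the trace):

wait_nbd_ready() {
    local nbd=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Usable once the kernel lists it, its capacity has been set, and a
        # direct 4 KiB read succeeds (cf. waitfornbd / wait_for_nbd_set_capacity).
        if grep -q -w "$nbd" /proc/partitions &&
           [[ -e /sys/block/$nbd/size ]] && (( $(< "/sys/block/$nbd/size") != 0 )) &&
           dd if="/dev/$nbd" of=/tmp/nbdprobe bs=4096 count=1 iflag=direct 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

wait_nbd_gone() {
    local nbd=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Done once the device disappears from the partition list (cf. waitfornbd_exit).
        grep -q -w "$nbd" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}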
00:21:00.978 mke2fs 1.47.0 (5-Feb-2023) 00:21:00.978 Discarding device blocks: 0/4096 done 00:21:00.978 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:00.978 00:21:00.978 Allocating group tables: 0/1 done 00:21:00.978 Writing inode tables: 0/1 done 00:21:00.978 Creating journal (1024 blocks): done 00:21:00.978 Writing superblocks and filesystem accounting information: 0/1 done 00:21:00.978 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.978 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73925 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73925 ']' 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73925 00:21:01.237 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:01.238 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.238 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73925 00:21:01.496 killing process with pid 73925 00:21:01.496 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.496 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.496 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73925' 00:21:01.497 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73925 00:21:01.497 10:26:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73925 00:21:02.432 10:26:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:02.432 00:21:02.432 real 0m11.063s 00:21:02.432 user 0m14.337s 00:21:02.432 sys 0m4.698s 00:21:02.432 10:26:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.432 10:26:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:02.432 ************************************ 
00:21:02.432 END TEST bdev_nbd 00:21:02.432 ************************************ 00:21:02.691 10:26:09 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:02.691 10:26:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:21:02.691 10:26:09 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:21:02.691 10:26:09 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:02.691 10:26:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.691 10:26:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.691 10:26:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:02.691 ************************************ 00:21:02.691 START TEST bdev_fio 00:21:02.691 ************************************ 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:02.691 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:02.691 ************************************ 00:21:02.691 START TEST bdev_fio_rw_verify 00:21:02.691 ************************************ 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:02.691 10:26:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:02.951 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:02.951 fio-3.35 00:21:02.951 Starting 6 threads 00:21:15.159 00:21:15.159 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74335: Mon Nov 25 10:26:20 2024 00:21:15.159 read: IOPS=33.7k, BW=132MiB/s (138MB/s)(1318MiB/10001msec) 00:21:15.159 slat (usec): min=2, max=865, avg= 5.95, stdev= 5.20 00:21:15.159 clat (usec): min=111, max=6507, avg=546.51, 
stdev=236.83
00:21:15.159 lat (usec): min=113, max=6525, avg=552.46, stdev=237.67
00:21:15.159 clat percentiles (usec):
00:21:15.159 | 50.000th=[ 537], 99.000th=[ 1221], 99.900th=[ 2180], 99.990th=[ 3851],
00:21:15.159 | 99.999th=[ 6456]
00:21:15.159 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(1332MiB/10001msec); 0 zone resets
00:21:15.159 slat (usec): min=11, max=3002, avg=24.16, stdev=33.28
00:21:15.159 clat (usec): min=67, max=5101, avg=623.50, stdev=257.91
00:21:15.159 lat (usec): min=80, max=5123, avg=647.66, stdev=263.70
00:21:15.159 clat percentiles (usec):
00:21:15.159 | 50.000th=[ 603], 99.000th=[ 1434], 99.900th=[ 2180], 99.990th=[ 3916],
00:21:15.159 | 99.999th=[ 4883]
00:21:15.159 bw ( KiB/s): min=106320, max=170824, per=100.00%, avg=136732.63, stdev=3126.87, samples=114
00:21:15.159 iops : min=26579, max=42706, avg=34182.89, stdev=781.75, samples=114
00:21:15.159 lat (usec) : 100=0.01%, 250=6.32%, 500=31.63%, 750=41.65%, 1000=15.76%
00:21:15.159 lat (msec) : 2=4.48%, 4=0.15%, 10=0.01%
00:21:15.159 cpu : usr=56.60%, sys=28.61%, ctx=9174, majf=0, minf=27973
00:21:15.159 IO depths : 1=12.0%, 2=24.4%, 4=50.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:21:15.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.159 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.159 issued rwts: total=337533,341112,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.159 latency : target=0, window=0, percentile=100.00%, depth=8
00:21:15.160
00:21:15.160 Run status group 0 (all jobs):
00:21:15.160 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1318MiB (1383MB), run=10001-10001msec
00:21:15.160 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=1332MiB (1397MB), run=10001-10001msec
00:21:15.160 -----------------------------------------------------
00:21:15.160 Suppressions used:
00:21:15.160 count bytes template
00:21:15.160 6 48 /usr/src/fio/parse.c
00:21:15.160 3346 321216 /usr/src/fio/iolog.c
00:21:15.160 1 8 libtcmalloc_minimal.so
00:21:15.160 1 904 libcrypto.so
00:21:15.160 -----------------------------------------------------
00:21:15.160
00:21:15.160
00:21:15.160 real 0m12.562s
00:21:15.160 user 0m35.966s
00:21:15.160 sys 0m17.603s 10:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.160 10:26:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:15.160 ************************************ 00:21:15.160 END TEST bdev_fio_rw_verify ************************************ 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0098008f-8a4f-4a32-a396-5bdd4ba98292"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0098008f-8a4f-4a32-a396-5bdd4ba98292",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dcb773c5-107f-47a7-bd1e-7a13ed71bd58"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dcb773c5-107f-47a7-bd1e-7a13ed71bd58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6bcd9e1f-eedf-4818-ae9c-e935c3a41ff6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6bcd9e1f-eedf-4818-ae9c-e935c3a41ff6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "64ffe7ae-c484-490d-96ab-c6b48b495fab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "64ffe7ae-c484-490d-96ab-c6b48b495fab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e4347bc3-3c43-4e41-b6b3-08f7032d0874"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e4347bc3-3c43-4e41-b6b3-08f7032d0874",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5530eab5-5eaf-4462-a404-fab1b7a3a0e6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5530eab5-5eaf-4462-a404-fab1b7a3a0e6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:15.421 /home/vagrant/spdk_repo/spdk 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
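The trim pass above hinges on a single jq filter: bdev/blockdev.sh@354 keeps only bdevs whose supported_io_types.unmap is true, and since every xNVMe bdev in the JSON dump reports "unmap": false, the filtered name list is empty, the [[ -n '' ]] test fails, and the trim fio job is skipped instead of being run against devices that would reject the deallocate. The equivalent check against a live target, as a sketch (the trace replays a saved JSON dump through the same select() filter; bdev_get_bdevs returns an array, hence the leading .[]):

# List only the bdevs that can service unmap/trim, as blockdev.sh@354 does.
./scripts/rpc.py bdev_get_bdevs |
    jq -r '.[] | select(.supported_io_types.unmap == true) | .name'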
00:21:15.421
00:21:15.421 real 0m12.791s
00:21:15.421 user 0m36.078s
00:21:15.421 sys 0m17.724s
00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:15.421 10:26:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:21:15.421 ************************************
00:21:15.421 END TEST bdev_fio
00:21:15.421 ************************************
00:21:15.421 10:26:22 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:21:15.422 10:26:22 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:15.422 10:26:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:15.422 10:26:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:15.422 10:26:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:15.422 ************************************
00:21:15.422 START TEST bdev_verify
00:21:15.422 ************************************
00:21:15.422 10:26:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:21:15.679 [2024-11-25 10:26:22.561347] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:21:15.679 [2024-11-25 10:26:22.561491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74504 ]
00:21:15.679 [2024-11-25 10:26:22.729586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:15.942 [2024-11-25 10:26:22.896329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:15.942 [2024-11-25 10:26:22.896358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:16.508 Running I/O for 5 seconds...
00:21:18.821 27968.00 IOPS, 109.25 MiB/s [2024-11-25T10:26:26.869Z] 26944.00 IOPS, 105.25 MiB/s [2024-11-25T10:26:27.807Z] 26741.33 IOPS, 104.46 MiB/s [2024-11-25T10:26:28.745Z] 25888.00 IOPS, 101.12 MiB/s
00:21:21.633 Latency(us)
00:21:21.633 [2024-11-25T10:26:28.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:21.633 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0x80000
00:21:21.633 nvme0n1 : 5.03 1985.56 7.76 0.00 0.00 64362.72 12580.81 58113.85
00:21:21.633 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x80000 length 0x80000
00:21:21.633 nvme0n1 : 5.05 1925.90 7.52 0.00 0.00 66359.19 10685.79 67378.38
00:21:21.633 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0x80000
00:21:21.633 nvme0n2 : 5.03 1985.05 7.75 0.00 0.00 64290.60 14528.46 56008.28
00:21:21.633 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x80000 length 0x80000
00:21:21.633 nvme0n2 : 5.05 1925.02 7.52 0.00 0.00 66292.91 13686.23 63167.23
00:21:21.633 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0x80000
00:21:21.633 nvme0n3 : 5.04 1982.84 7.75 0.00 0.00 64275.60 13896.79 61482.77
00:21:21.633 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x80000 length 0x80000
00:21:21.633 nvme0n3 : 5.06 1923.13 7.51 0.00 0.00 66255.31 13159.84 62325.00
00:21:21.633 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0x20000
00:21:21.633 nvme1n1 : 5.04 1982.21 7.74 0.00 0.00 64205.63 9948.84 61482.77
00:21:21.633 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x20000 length 0x20000
00:21:21.633 nvme1n1 : 5.06 1922.41 7.51 0.00 0.00 66188.68 10106.76 69483.95
00:21:21.633 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0xbd0bd
00:21:21.633 nvme2n1 : 5.05 2874.73 11.23 0.00 0.00 44124.92 4605.94 53060.47
00:21:21.633 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:21:21.633 nvme2n1 : 5.05 2808.02 10.97 0.00 0.00 45228.89 4974.42 54744.93
00:21:21.633 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0x0 length 0xa0000
00:21:21.633 nvme3n1 : 5.05 2002.33 7.82 0.00 0.00 63394.62 4526.98 60219.42
00:21:21.633 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:21.633 Verification LBA range: start 0xa0000 length 0xa0000
00:21:21.633 nvme3n1 : 5.06 1946.68 7.60 0.00 0.00 65042.84 6737.84 66536.15
00:21:21.633 [2024-11-25T10:26:28.745Z] ===================================================================================================================
00:21:21.633 [2024-11-25T10:26:28.745Z] Total : 25263.88 98.69 0.00 0.00 60464.29 4526.98 69483.95
00:21:22.580
00:21:22.580 real 0m7.122s
00:21:22.580 user 0m10.790s
00:21:22.580 sys 0m2.130s
00:21:22.580 10:26:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
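The MiB/s figures above are derived from IOPS at the 4096-byte IO size given by -o 4096; a quick arithmetic check of the first progress sample (a worked example, not test output):

  $ awk 'BEGIN { printf "%.2f MiB/s\n", 27968 * 4096 / (1024 * 1024) }'
  109.25 MiB/s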
00:21:22.580 10:26:29 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:21:22.580 ************************************
00:21:22.580 END TEST bdev_verify
00:21:22.580 ************************************
00:21:22.580 10:26:29 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:22.580 10:26:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:21:22.580 10:26:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:22.580 10:26:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:22.580 ************************************
00:21:22.580 START TEST bdev_verify_big_io
00:21:22.580 ************************************
00:21:22.580 10:26:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:22.839 [2024-11-25 10:26:29.752628] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:21:22.839 [2024-11-25 10:26:29.752748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74609 ]
00:21:22.839 [2024-11-25 10:26:29.934826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:23.098 [2024-11-25 10:26:30.054129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:23.098 [2024-11-25 10:26:30.054161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:23.666 Running I/O for 5 seconds...
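The -m 0x3 argument to bdevperf is a hexadecimal core mask: bits 0 and 1 are set, which is why two reactors start on cores 0 and 1 and why each device in the verify tables appears twice, once per core mask 0x1 and 0x2. The mask arithmetic as a one-line sketch:

  $ printf '0x%x\n' "$(( (1 << 0) | (1 << 1) ))"
  0x3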
00:21:28.335 2128.00 IOPS, 133.00 MiB/s [2024-11-25T10:26:36.384Z] 3232.00 IOPS, 202.00 MiB/s [2024-11-25T10:26:36.643Z] 3835.67 IOPS, 239.73 MiB/s
00:21:29.531 Latency(us)
00:21:29.531 [2024-11-25T10:26:36.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:29.532 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0x8000
00:21:29.532 nvme0n1 : 5.62 159.56 9.97 0.00 0.00 776058.46 10369.95 976986.47
00:21:29.532 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x8000 length 0x8000
00:21:29.532 nvme0n1 : 5.62 155.10 9.69 0.00 0.00 792695.36 86749.66 1064578.36
00:21:29.532 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0x8000
00:21:29.532 nvme0n2 : 5.54 150.17 9.39 0.00 0.00 810181.46 146547.97 781589.18
00:21:29.532 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x8000 length 0x8000
00:21:29.532 nvme0n2 : 5.52 162.17 10.14 0.00 0.00 741444.62 109489.86 670414.86
00:21:29.532 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0x8000
00:21:29.532 nvme0n3 : 5.62 156.67 9.79 0.00 0.00 752889.96 87591.89 1900070.25
00:21:29.532 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x8000 length 0x8000
00:21:29.532 nvme0n3 : 5.69 157.41 9.84 0.00 0.00 758750.98 58956.08 1374518.90
00:21:29.532 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0x2000
00:21:29.532 nvme1n1 : 5.62 179.29 11.21 0.00 0.00 650551.02 59377.20 1367781.06
00:21:29.532 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x2000 length 0x2000
00:21:29.532 nvme1n1 : 5.68 154.93 9.68 0.00 0.00 749073.37 56850.51 1448635.12
00:21:29.532 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0xbd0b
00:21:29.532 nvme2n1 : 5.66 211.90 13.24 0.00 0.00 538789.27 42322.04 677152.69
00:21:29.532 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0xbd0b length 0xbd0b
00:21:29.532 nvme2n1 : 5.70 207.68 12.98 0.00 0.00 548583.73 28635.81 1138694.58
00:21:29.532 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0x0 length 0xa000
00:21:29.532 nvme3n1 : 5.67 191.94 12.00 0.00 0.00 580208.08 980.41 629987.83
00:21:29.532 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:29.532 Verification LBA range: start 0xa000 length 0xa000
00:21:29.532 nvme3n1 : 5.71 190.38 11.90 0.00 0.00 584706.23 9211.89 1320616.20
00:21:29.532 [2024-11-25T10:26:36.644Z] ===================================================================================================================
00:21:29.532 [2024-11-25T10:26:36.644Z] Total : 2077.22 129.83 0.00 0.00 677969.72 980.41 1900070.25
00:21:30.908
00:21:30.908 real 0m8.125s
00:21:30.908 user 0m14.725s
00:21:30.908 sys 0m0.575s
00:21:30.908 10:26:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
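Same throughput identity as in the 4 KiB run, but at the 64 KiB IO size set by -o 65536, which is why this pass shows far fewer IOPS at comparable bandwidth; checking the first sample (worked example only):

  $ awk 'BEGIN { printf "%.2f MiB/s\n", 2128 * 65536 / (1024 * 1024) }'
  133.00 MiB/s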
00:21:30.908 10:26:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:21:30.908 ************************************
00:21:30.908 END TEST bdev_verify_big_io
00:21:30.908 ************************************
00:21:30.908 10:26:37 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:30.908 10:26:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:21:30.908 10:26:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:30.908 10:26:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:30.908 ************************************
00:21:30.908 START TEST bdev_write_zeroes
00:21:30.908 ************************************
00:21:30.908 10:26:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:30.908 [2024-11-25 10:26:37.948855] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:21:30.908 [2024-11-25 10:26:37.948978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74720 ]
00:21:31.194 [2024-11-25 10:26:38.130725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:31.194 [2024-11-25 10:26:38.265657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:31.762 Running I/O for 1 seconds...
00:21:32.695 56864.00 IOPS, 222.12 MiB/s
00:21:32.695
00:21:32.695 Latency(us)
00:21:32.695 [2024-11-25T10:26:39.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:32.695 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme0n1 : 1.03 9164.26 35.80 0.00 0.00 13955.38 8369.66 28214.70
00:21:32.695 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme0n2 : 1.03 9153.44 35.76 0.00 0.00 13963.13 8422.30 27583.02
00:21:32.695 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme0n3 : 1.04 9143.17 35.72 0.00 0.00 13970.49 8317.02 26846.07
00:21:32.695 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme1n1 : 1.04 9133.02 35.68 0.00 0.00 13977.60 8317.02 28635.81
00:21:32.695 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme2n1 : 1.03 10570.77 41.29 0.00 0.00 12067.17 5106.02 28425.25
00:21:32.695 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:32.695 nvme3n1 : 1.04 9122.48 35.63 0.00 0.00 13890.81 3632.12 34110.30
00:21:32.695 [2024-11-25T10:26:39.807Z] ===================================================================================================================
00:21:32.695 [2024-11-25T10:26:39.807Z] Total : 56287.14 219.87 0.00 0.00 13598.63 3632.12 34110.30
00:21:34.080
00:21:34.080 real 0m3.047s
00:21:34.080 user 0m2.262s
00:21:34.080 sys 0m0.593s
00:21:34.080 10:26:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:34.080 10:26:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:21:34.080 ************************************
00:21:34.080 END TEST bdev_write_zeroes
00:21:34.080 ************************************
00:21:34.080 10:26:40 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:34.080 10:26:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:21:34.080 10:26:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:34.080 10:26:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:34.080 ************************************
00:21:34.080 START TEST bdev_json_nonenclosed
00:21:34.080 ************************************
00:21:34.080 10:26:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:34.080 [2024-11-25 10:26:41.053604] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:21:34.080 [2024-11-25 10:26:41.053722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74778 ]
00:21:34.338 [2024-11-25 10:26:41.233784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:34.338 [2024-11-25 10:26:41.347673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:34.338 [2024-11-25 10:26:41.347764] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:21:34.338 [2024-11-25 10:26:41.347787] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:21:34.338 [2024-11-25 10:26:41.347799] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:34.597
00:21:34.598 real 0m0.645s
00:21:34.598 user 0m0.401s
00:21:34.598 sys 0m0.139s
00:21:34.598 10:26:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:34.598 10:26:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:21:34.598 ************************************
00:21:34.598 END TEST bdev_json_nonenclosed
00:21:34.598 ************************************
00:21:34.598 10:26:41 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:34.598 10:26:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:21:34.598 10:26:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:34.598 10:26:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:34.598 ************************************
00:21:34.598 START TEST bdev_json_nonarray
00:21:34.598 ************************************
00:21:34.598 10:26:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:34.856 [2024-11-25 10:26:41.760518] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:21:34.856 [2024-11-25 10:26:41.760627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74805 ]
00:21:34.856 [2024-11-25 10:26:41.941979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:35.115 [2024-11-25 10:26:42.063505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:35.115 [2024-11-25 10:26:42.063615] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
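Both JSON guard tests feed bdevperf a deliberately malformed --json file and pass precisely because startup fails. The fixture contents are not shown in this log; shapes like the following would trip the two errors reported here (illustrative guesses matched to the error text, not the actual repo files):

  $ cat nonenclosed.json   # top-level config not enclosed in {}
  "subsystems": []
  $ cat nonarray.json      # "subsystems" present but not an array
  { "subsystems": 42 }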
00:21:35.115 [2024-11-25 10:26:42.063638] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:21:35.115 [2024-11-25 10:26:42.063651] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:35.375
00:21:35.375 real 0m0.655s
00:21:35.375 user 0m0.410s
00:21:35.375 sys 0m0.139s
00:21:35.375 10:26:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:35.375 10:26:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:21:35.375 ************************************
00:21:35.375 END TEST bdev_json_nonarray
00:21:35.375 ************************************
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:21:35.375 10:26:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:35.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:44.059 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:21:44.059 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:21:44.059 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:21:44.059 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:21:44.318
00:21:44.318 real 1m3.248s
00:21:44.318 user 1m34.345s
00:21:44.318 sys 0m39.508s
00:21:44.318 10:26:51 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:44.318 10:26:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:21:44.318 ************************************
00:21:44.318 END TEST blockdev_xnvme
00:21:44.318 ************************************
00:21:44.318 10:26:51 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:21:44.318 10:26:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:44.318 10:26:51 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:44.318 10:26:51 -- common/autotest_common.sh@10 -- # set +x
00:21:44.318 ************************************
00:21:44.318 START TEST ublk
00:21:44.318 ************************************
00:21:44.318 10:26:51 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:21:44.318 * Looking for test storage...
00:21:44.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:44.318 10:26:51 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.318 10:26:51 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.318 10:26:51 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.577 10:26:51 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.577 10:26:51 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.577 10:26:51 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.577 10:26:51 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.577 10:26:51 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.577 10:26:51 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:44.577 10:26:51 ublk -- scripts/common.sh@345 -- # : 1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.577 10:26:51 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.577 10:26:51 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@353 -- # local d=1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.577 10:26:51 ublk -- scripts/common.sh@355 -- # echo 1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.577 10:26:51 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@353 -- # local d=2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.577 10:26:51 ublk -- scripts/common.sh@355 -- # echo 2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.577 10:26:51 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.577 10:26:51 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.577 10:26:51 ublk -- scripts/common.sh@368 -- # return 0 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.577 --rc genhtml_branch_coverage=1 00:21:44.577 --rc genhtml_function_coverage=1 00:21:44.577 --rc genhtml_legend=1 00:21:44.577 --rc geninfo_all_blocks=1 00:21:44.577 --rc geninfo_unexecuted_blocks=1 00:21:44.577 00:21:44.577 ' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.577 --rc genhtml_branch_coverage=1 00:21:44.577 --rc genhtml_function_coverage=1 00:21:44.577 --rc genhtml_legend=1 00:21:44.577 --rc geninfo_all_blocks=1 00:21:44.577 --rc geninfo_unexecuted_blocks=1 00:21:44.577 00:21:44.577 ' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.577 --rc genhtml_branch_coverage=1 00:21:44.577 --rc 
genhtml_function_coverage=1 00:21:44.577 --rc genhtml_legend=1 00:21:44.577 --rc geninfo_all_blocks=1 00:21:44.577 --rc geninfo_unexecuted_blocks=1 00:21:44.577 00:21:44.577 ' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.577 --rc genhtml_branch_coverage=1 00:21:44.577 --rc genhtml_function_coverage=1 00:21:44.577 --rc genhtml_legend=1 00:21:44.577 --rc geninfo_all_blocks=1 00:21:44.577 --rc geninfo_unexecuted_blocks=1 00:21:44.577 00:21:44.577 ' 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:44.577 10:26:51 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:44.577 10:26:51 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:44.577 10:26:51 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:44.577 10:26:51 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:44.577 10:26:51 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:44.577 10:26:51 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:44.577 10:26:51 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:44.577 10:26:51 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:44.577 10:26:51 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.577 10:26:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:44.577 ************************************ 00:21:44.577 START TEST test_save_ublk_config 00:21:44.577 ************************************ 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75121 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75121 00:21:44.577 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75121 ']' 00:21:44.578 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.578 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
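Two preconditions for the ublk suite are established above: setup.sh rebound the emulated NVMe controllers from the kernel nvme driver to uio_pci_generic, and modprobe ublk_drv loaded the kernel ublk driver. Quick spot checks for both (a sketch; the PCI address is taken from the rebind lines above and the sysfs layout may vary):

  $ basename "$(readlink /sys/bus/pci/devices/0000:00:10.0/driver)"
  uio_pci_generic
  $ lsmod | grep -w ublk_drv && test -c /dev/ublk-control && echo "ublk ready"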
00:21:44.578 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.578 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.578 10:26:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:44.578 [2024-11-25 10:26:51.647523] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:21:44.578 [2024-11-25 10:26:51.647728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75121 ] 00:21:44.836 [2024-11-25 10:26:51.844567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.095 [2024-11-25 10:26:51.974105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.032 10:26:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:46.032 [2024-11-25 10:26:52.980555] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:46.032 [2024-11-25 10:26:52.981743] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:46.032 malloc0 00:21:46.032 [2024-11-25 10:26:53.074695] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:46.032 [2024-11-25 10:26:53.074836] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:46.032 [2024-11-25 10:26:53.074850] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:46.032 [2024-11-25 10:26:53.074859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:46.032 [2024-11-25 10:26:53.082589] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:46.032 [2024-11-25 10:26:53.082618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:46.032 [2024-11-25 10:26:53.090559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:46.032 [2024-11-25 10:26:53.090675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:46.032 [2024-11-25 10:26:53.114544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:46.032 0 00:21:46.032 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.032 10:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:46.032 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.032 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:46.291 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.291 10:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:46.291 
"subsystems": [ 00:21:46.291 { 00:21:46.291 "subsystem": "fsdev", 00:21:46.291 "config": [ 00:21:46.291 { 00:21:46.291 "method": "fsdev_set_opts", 00:21:46.291 "params": { 00:21:46.291 "fsdev_io_pool_size": 65535, 00:21:46.291 "fsdev_io_cache_size": 256 00:21:46.291 } 00:21:46.291 } 00:21:46.291 ] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "keyring", 00:21:46.291 "config": [] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "iobuf", 00:21:46.291 "config": [ 00:21:46.291 { 00:21:46.291 "method": "iobuf_set_options", 00:21:46.291 "params": { 00:21:46.291 "small_pool_count": 8192, 00:21:46.291 "large_pool_count": 1024, 00:21:46.291 "small_bufsize": 8192, 00:21:46.291 "large_bufsize": 135168, 00:21:46.291 "enable_numa": false 00:21:46.291 } 00:21:46.291 } 00:21:46.291 ] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "sock", 00:21:46.291 "config": [ 00:21:46.291 { 00:21:46.291 "method": "sock_set_default_impl", 00:21:46.291 "params": { 00:21:46.291 "impl_name": "posix" 00:21:46.291 } 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "method": "sock_impl_set_options", 00:21:46.291 "params": { 00:21:46.291 "impl_name": "ssl", 00:21:46.291 "recv_buf_size": 4096, 00:21:46.291 "send_buf_size": 4096, 00:21:46.291 "enable_recv_pipe": true, 00:21:46.291 "enable_quickack": false, 00:21:46.291 "enable_placement_id": 0, 00:21:46.291 "enable_zerocopy_send_server": true, 00:21:46.291 "enable_zerocopy_send_client": false, 00:21:46.291 "zerocopy_threshold": 0, 00:21:46.291 "tls_version": 0, 00:21:46.291 "enable_ktls": false 00:21:46.291 } 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "method": "sock_impl_set_options", 00:21:46.291 "params": { 00:21:46.291 "impl_name": "posix", 00:21:46.291 "recv_buf_size": 2097152, 00:21:46.291 "send_buf_size": 2097152, 00:21:46.291 "enable_recv_pipe": true, 00:21:46.291 "enable_quickack": false, 00:21:46.291 "enable_placement_id": 0, 00:21:46.291 "enable_zerocopy_send_server": true, 00:21:46.291 "enable_zerocopy_send_client": false, 00:21:46.291 "zerocopy_threshold": 0, 00:21:46.291 "tls_version": 0, 00:21:46.291 "enable_ktls": false 00:21:46.291 } 00:21:46.291 } 00:21:46.291 ] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "vmd", 00:21:46.291 "config": [] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "accel", 00:21:46.291 "config": [ 00:21:46.291 { 00:21:46.291 "method": "accel_set_options", 00:21:46.291 "params": { 00:21:46.291 "small_cache_size": 128, 00:21:46.291 "large_cache_size": 16, 00:21:46.291 "task_count": 2048, 00:21:46.291 "sequence_count": 2048, 00:21:46.291 "buf_count": 2048 00:21:46.291 } 00:21:46.291 } 00:21:46.291 ] 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "subsystem": "bdev", 00:21:46.291 "config": [ 00:21:46.291 { 00:21:46.291 "method": "bdev_set_options", 00:21:46.291 "params": { 00:21:46.291 "bdev_io_pool_size": 65535, 00:21:46.291 "bdev_io_cache_size": 256, 00:21:46.291 "bdev_auto_examine": true, 00:21:46.291 "iobuf_small_cache_size": 128, 00:21:46.291 "iobuf_large_cache_size": 16 00:21:46.291 } 00:21:46.291 }, 00:21:46.291 { 00:21:46.291 "method": "bdev_raid_set_options", 00:21:46.291 "params": { 00:21:46.292 "process_window_size_kb": 1024, 00:21:46.292 "process_max_bandwidth_mb_sec": 0 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "bdev_iscsi_set_options", 00:21:46.292 "params": { 00:21:46.292 "timeout_sec": 30 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "bdev_nvme_set_options", 00:21:46.292 "params": { 00:21:46.292 "action_on_timeout": "none", 
00:21:46.292 "timeout_us": 0, 00:21:46.292 "timeout_admin_us": 0, 00:21:46.292 "keep_alive_timeout_ms": 10000, 00:21:46.292 "arbitration_burst": 0, 00:21:46.292 "low_priority_weight": 0, 00:21:46.292 "medium_priority_weight": 0, 00:21:46.292 "high_priority_weight": 0, 00:21:46.292 "nvme_adminq_poll_period_us": 10000, 00:21:46.292 "nvme_ioq_poll_period_us": 0, 00:21:46.292 "io_queue_requests": 0, 00:21:46.292 "delay_cmd_submit": true, 00:21:46.292 "transport_retry_count": 4, 00:21:46.292 "bdev_retry_count": 3, 00:21:46.292 "transport_ack_timeout": 0, 00:21:46.292 "ctrlr_loss_timeout_sec": 0, 00:21:46.292 "reconnect_delay_sec": 0, 00:21:46.292 "fast_io_fail_timeout_sec": 0, 00:21:46.292 "disable_auto_failback": false, 00:21:46.292 "generate_uuids": false, 00:21:46.292 "transport_tos": 0, 00:21:46.292 "nvme_error_stat": false, 00:21:46.292 "rdma_srq_size": 0, 00:21:46.292 "io_path_stat": false, 00:21:46.292 "allow_accel_sequence": false, 00:21:46.292 "rdma_max_cq_size": 0, 00:21:46.292 "rdma_cm_event_timeout_ms": 0, 00:21:46.292 "dhchap_digests": [ 00:21:46.292 "sha256", 00:21:46.292 "sha384", 00:21:46.292 "sha512" 00:21:46.292 ], 00:21:46.292 "dhchap_dhgroups": [ 00:21:46.292 "null", 00:21:46.292 "ffdhe2048", 00:21:46.292 "ffdhe3072", 00:21:46.292 "ffdhe4096", 00:21:46.292 "ffdhe6144", 00:21:46.292 "ffdhe8192" 00:21:46.292 ] 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "bdev_nvme_set_hotplug", 00:21:46.292 "params": { 00:21:46.292 "period_us": 100000, 00:21:46.292 "enable": false 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "bdev_malloc_create", 00:21:46.292 "params": { 00:21:46.292 "name": "malloc0", 00:21:46.292 "num_blocks": 8192, 00:21:46.292 "block_size": 4096, 00:21:46.292 "physical_block_size": 4096, 00:21:46.292 "uuid": "89279379-113b-4b2b-9401-30d45d95ff61", 00:21:46.292 "optimal_io_boundary": 0, 00:21:46.292 "md_size": 0, 00:21:46.292 "dif_type": 0, 00:21:46.292 "dif_is_head_of_md": false, 00:21:46.292 "dif_pi_format": 0 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "bdev_wait_for_examine" 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "scsi", 00:21:46.292 "config": null 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "scheduler", 00:21:46.292 "config": [ 00:21:46.292 { 00:21:46.292 "method": "framework_set_scheduler", 00:21:46.292 "params": { 00:21:46.292 "name": "static" 00:21:46.292 } 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "vhost_scsi", 00:21:46.292 "config": [] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "vhost_blk", 00:21:46.292 "config": [] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "ublk", 00:21:46.292 "config": [ 00:21:46.292 { 00:21:46.292 "method": "ublk_create_target", 00:21:46.292 "params": { 00:21:46.292 "cpumask": "1" 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "ublk_start_disk", 00:21:46.292 "params": { 00:21:46.292 "bdev_name": "malloc0", 00:21:46.292 "ublk_id": 0, 00:21:46.292 "num_queues": 1, 00:21:46.292 "queue_depth": 128 00:21:46.292 } 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "nbd", 00:21:46.292 "config": [] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "nvmf", 00:21:46.292 "config": [ 00:21:46.292 { 00:21:46.292 "method": "nvmf_set_config", 00:21:46.292 "params": { 00:21:46.292 "discovery_filter": "match_any", 00:21:46.292 "admin_cmd_passthru": { 00:21:46.292 "identify_ctrlr": false 
00:21:46.292 }, 00:21:46.292 "dhchap_digests": [ 00:21:46.292 "sha256", 00:21:46.292 "sha384", 00:21:46.292 "sha512" 00:21:46.292 ], 00:21:46.292 "dhchap_dhgroups": [ 00:21:46.292 "null", 00:21:46.292 "ffdhe2048", 00:21:46.292 "ffdhe3072", 00:21:46.292 "ffdhe4096", 00:21:46.292 "ffdhe6144", 00:21:46.292 "ffdhe8192" 00:21:46.292 ] 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "nvmf_set_max_subsystems", 00:21:46.292 "params": { 00:21:46.292 "max_subsystems": 1024 00:21:46.292 } 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "method": "nvmf_set_crdt", 00:21:46.292 "params": { 00:21:46.292 "crdt1": 0, 00:21:46.292 "crdt2": 0, 00:21:46.292 "crdt3": 0 00:21:46.292 } 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 }, 00:21:46.292 { 00:21:46.292 "subsystem": "iscsi", 00:21:46.292 "config": [ 00:21:46.292 { 00:21:46.292 "method": "iscsi_set_options", 00:21:46.292 "params": { 00:21:46.292 "node_base": "iqn.2016-06.io.spdk", 00:21:46.292 "max_sessions": 128, 00:21:46.292 "max_connections_per_session": 2, 00:21:46.292 "max_queue_depth": 64, 00:21:46.292 "default_time2wait": 2, 00:21:46.292 "default_time2retain": 20, 00:21:46.292 "first_burst_length": 8192, 00:21:46.292 "immediate_data": true, 00:21:46.292 "allow_duplicated_isid": false, 00:21:46.292 "error_recovery_level": 0, 00:21:46.292 "nop_timeout": 60, 00:21:46.292 "nop_in_interval": 30, 00:21:46.292 "disable_chap": false, 00:21:46.292 "require_chap": false, 00:21:46.292 "mutual_chap": false, 00:21:46.292 "chap_group": 0, 00:21:46.292 "max_large_datain_per_connection": 64, 00:21:46.292 "max_r2t_per_connection": 4, 00:21:46.292 "pdu_pool_size": 36864, 00:21:46.292 "immediate_data_pool_size": 16384, 00:21:46.292 "data_out_pool_size": 2048 00:21:46.292 } 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 } 00:21:46.292 ] 00:21:46.292 }' 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75121 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75121 ']' 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75121 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.292 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75121 00:21:46.551 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.551 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.551 killing process with pid 75121 00:21:46.551 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75121' 00:21:46.551 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75121 00:21:46.551 10:26:53 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75121 00:21:47.929 [2024-11-25 10:26:54.926936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:47.929 [2024-11-25 10:26:54.958597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:47.929 [2024-11-25 10:26:54.958721] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:47.929 [2024-11-25 10:26:54.967544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:47.929 [2024-11-25 
10:26:54.967595] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:47.929 [2024-11-25 10:26:54.967610] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:47.929 [2024-11-25 10:26:54.967634] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:47.929 [2024-11-25 10:26:54.967774] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75192 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75192 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75192 ']' 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:49.864 10:26:56 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:49.864 "subsystems": [ 00:21:49.864 { 00:21:49.864 "subsystem": "fsdev", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "fsdev_set_opts", 00:21:49.864 "params": { 00:21:49.864 "fsdev_io_pool_size": 65535, 00:21:49.864 "fsdev_io_cache_size": 256 00:21:49.864 } 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "keyring", 00:21:49.864 "config": [] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "iobuf", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "iobuf_set_options", 00:21:49.864 "params": { 00:21:49.864 "small_pool_count": 8192, 00:21:49.864 "large_pool_count": 1024, 00:21:49.864 "small_bufsize": 8192, 00:21:49.864 "large_bufsize": 135168, 00:21:49.864 "enable_numa": false 00:21:49.864 } 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "sock", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "sock_set_default_impl", 00:21:49.864 "params": { 00:21:49.864 "impl_name": "posix" 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "sock_impl_set_options", 00:21:49.864 "params": { 00:21:49.864 "impl_name": "ssl", 00:21:49.864 "recv_buf_size": 4096, 00:21:49.864 "send_buf_size": 4096, 00:21:49.864 "enable_recv_pipe": true, 00:21:49.864 "enable_quickack": false, 00:21:49.864 "enable_placement_id": 0, 00:21:49.864 "enable_zerocopy_send_server": true, 00:21:49.864 "enable_zerocopy_send_client": false, 00:21:49.864 "zerocopy_threshold": 0, 00:21:49.864 "tls_version": 0, 00:21:49.864 "enable_ktls": false 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "sock_impl_set_options", 00:21:49.864 "params": { 00:21:49.864 "impl_name": "posix", 00:21:49.864 "recv_buf_size": 2097152, 00:21:49.864 "send_buf_size": 2097152, 00:21:49.864 "enable_recv_pipe": true, 00:21:49.864 "enable_quickack": false, 00:21:49.864 "enable_placement_id": 0, 00:21:49.864 "enable_zerocopy_send_server": true, 
00:21:49.864 "enable_zerocopy_send_client": false, 00:21:49.864 "zerocopy_threshold": 0, 00:21:49.864 "tls_version": 0, 00:21:49.864 "enable_ktls": false 00:21:49.864 } 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "vmd", 00:21:49.864 "config": [] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "accel", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "accel_set_options", 00:21:49.864 "params": { 00:21:49.864 "small_cache_size": 128, 00:21:49.864 "large_cache_size": 16, 00:21:49.864 "task_count": 2048, 00:21:49.864 "sequence_count": 2048, 00:21:49.864 "buf_count": 2048 00:21:49.864 } 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "bdev", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "bdev_set_options", 00:21:49.864 "params": { 00:21:49.864 "bdev_io_pool_size": 65535, 00:21:49.864 "bdev_io_cache_size": 256, 00:21:49.864 "bdev_auto_examine": true, 00:21:49.864 "iobuf_small_cache_size": 128, 00:21:49.864 "iobuf_large_cache_size": 16 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_raid_set_options", 00:21:49.864 "params": { 00:21:49.864 "process_window_size_kb": 1024, 00:21:49.864 "process_max_bandwidth_mb_sec": 0 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_iscsi_set_options", 00:21:49.864 "params": { 00:21:49.864 "timeout_sec": 30 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_nvme_set_options", 00:21:49.864 "params": { 00:21:49.864 "action_on_timeout": "none", 00:21:49.864 "timeout_us": 0, 00:21:49.864 "timeout_admin_us": 0, 00:21:49.864 "keep_alive_timeout_ms": 10000, 00:21:49.864 "arbitration_burst": 0, 00:21:49.864 "low_priority_weight": 0, 00:21:49.864 "medium_priority_weight": 0, 00:21:49.864 "high_priority_weight": 0, 00:21:49.864 "nvme_adminq_poll_period_us": 10000, 00:21:49.864 "nvme_ioq_poll_period_us": 0, 00:21:49.864 "io_queue_requests": 0, 00:21:49.864 "delay_cmd_submit": true, 00:21:49.864 "transport_retry_count": 4, 00:21:49.864 "bdev_retry_count": 3, 00:21:49.864 "transport_ack_timeout": 0, 00:21:49.864 "ctrlr_loss_timeout_sec": 0, 00:21:49.864 "reconnect_delay_sec": 0, 00:21:49.864 "fast_io_fail_timeout_sec": 0, 00:21:49.864 "disable_auto_failback": false, 00:21:49.864 "generate_uuids": false, 00:21:49.864 "transport_tos": 0, 00:21:49.864 "nvme_error_stat": false, 00:21:49.864 "rdma_srq_size": 0, 00:21:49.864 "io_path_stat": false, 00:21:49.864 "allow_accel_sequence": false, 00:21:49.864 "rdma_max_cq_size": 0, 00:21:49.864 "rdma_cm_event_timeout_ms": 0, 00:21:49.864 "dhchap_digests": [ 00:21:49.864 "sha256", 00:21:49.864 "sha384", 00:21:49.864 "sha512" 00:21:49.864 ], 00:21:49.864 "dhchap_dhgroups": [ 00:21:49.864 "null", 00:21:49.864 "ffdhe2048", 00:21:49.864 "ffdhe3072", 00:21:49.864 "ffdhe4096", 00:21:49.864 "ffdhe6144", 00:21:49.864 "ffdhe8192" 00:21:49.864 ] 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_nvme_set_hotplug", 00:21:49.864 "params": { 00:21:49.864 "period_us": 100000, 00:21:49.864 "enable": false 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_malloc_create", 00:21:49.864 "params": { 00:21:49.864 "name": "malloc0", 00:21:49.864 "num_blocks": 8192, 00:21:49.864 "block_size": 4096, 00:21:49.864 "physical_block_size": 4096, 00:21:49.864 "uuid": "89279379-113b-4b2b-9401-30d45d95ff61", 00:21:49.864 "optimal_io_boundary": 0, 00:21:49.864 "md_size": 0, 00:21:49.864 "dif_type": 0, 00:21:49.864 
"dif_is_head_of_md": false, 00:21:49.864 "dif_pi_format": 0 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "bdev_wait_for_examine" 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "scsi", 00:21:49.864 "config": null 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "scheduler", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "framework_set_scheduler", 00:21:49.864 "params": { 00:21:49.864 "name": "static" 00:21:49.864 } 00:21:49.864 } 00:21:49.864 ] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "vhost_scsi", 00:21:49.864 "config": [] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "vhost_blk", 00:21:49.864 "config": [] 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "subsystem": "ublk", 00:21:49.864 "config": [ 00:21:49.864 { 00:21:49.864 "method": "ublk_create_target", 00:21:49.864 "params": { 00:21:49.864 "cpumask": "1" 00:21:49.864 } 00:21:49.864 }, 00:21:49.864 { 00:21:49.864 "method": "ublk_start_disk", 00:21:49.864 "params": { 00:21:49.864 "bdev_name": "malloc0", 00:21:49.864 "ublk_id": 0, 00:21:49.864 "num_queues": 1, 00:21:49.864 "queue_depth": 128 00:21:49.864 } 00:21:49.864 } 00:21:49.865 ] 00:21:49.865 }, 00:21:49.865 { 00:21:49.865 "subsystem": "nbd", 00:21:49.865 "config": [] 00:21:49.865 }, 00:21:49.865 { 00:21:49.865 "subsystem": "nvmf", 00:21:49.865 "config": [ 00:21:49.865 { 00:21:49.865 "method": "nvmf_set_config", 00:21:49.865 "params": { 00:21:49.865 "discovery_filter": "match_any", 00:21:49.865 "admin_cmd_passthru": { 00:21:49.865 "identify_ctrlr": false 00:21:49.865 }, 00:21:49.865 "dhchap_digests": [ 00:21:49.865 "sha256", 00:21:49.865 "sha384", 00:21:49.865 "sha512" 00:21:49.865 ], 00:21:49.865 "dhchap_dhgroups": [ 00:21:49.865 "null", 00:21:49.865 "ffdhe2048", 00:21:49.865 "ffdhe3072", 00:21:49.865 "ffdhe4096", 00:21:49.865 "ffdhe6144", 00:21:49.865 "ffdhe8192" 00:21:49.865 ] 00:21:49.865 } 00:21:49.865 }, 00:21:49.865 { 00:21:49.865 "method": "nvmf_set_max_subsystems", 00:21:49.865 "params": { 00:21:49.865 "max_subsystems": 1024 00:21:49.865 } 00:21:49.865 }, 00:21:49.865 { 00:21:49.865 "method": "nvmf_set_crdt", 00:21:49.865 "params": { 00:21:49.865 "crdt1": 0, 00:21:49.865 "crdt2": 0, 00:21:49.865 "crdt3": 0 00:21:49.865 } 00:21:49.865 } 00:21:49.865 ] 00:21:49.865 }, 00:21:49.865 { 00:21:49.865 "subsystem": "iscsi", 00:21:49.865 "config": [ 00:21:49.865 { 00:21:49.865 "method": "iscsi_set_options", 00:21:49.865 "params": { 00:21:49.865 "node_base": "iqn.2016-06.io.spdk", 00:21:49.865 "max_sessions": 128, 00:21:49.865 "max_connections_per_session": 2, 00:21:49.865 "max_queue_depth": 64, 00:21:49.865 "default_time2wait": 2, 00:21:49.865 "default_time2retain": 20, 00:21:49.865 "first_burst_length": 8192, 00:21:49.865 "immediate_data": true, 00:21:49.865 "allow_duplicated_isid": false, 00:21:49.865 "error_recovery_level": 0, 00:21:49.865 "nop_timeout": 60, 00:21:49.865 "nop_in_interval": 30, 00:21:49.865 "disable_chap": false, 00:21:49.865 "require_chap": false, 00:21:49.865 "mutual_chap": false, 00:21:49.865 "chap_group": 0, 00:21:49.865 "max_large_datain_per_connection": 64, 00:21:49.865 "max_r2t_per_connection": 4, 00:21:49.865 "pdu_pool_size": 36864, 00:21:49.865 "immediate_data_pool_size": 16384, 00:21:49.865 "data_out_pool_size": 2048 00:21:49.865 } 00:21:49.865 } 00:21:49.865 ] 00:21:49.865 } 00:21:49.865 ] 00:21:49.865 }' 00:21:50.189 [2024-11-25 10:26:57.007607] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:21:50.189 [2024-11-25 10:26:57.007735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75192 ] 00:21:50.189 [2024-11-25 10:26:57.195257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.448 [2024-11-25 10:26:57.324194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.384 [2024-11-25 10:26:58.451512] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:51.384 [2024-11-25 10:26:58.452741] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:51.384 [2024-11-25 10:26:58.459668] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:51.384 [2024-11-25 10:26:58.459762] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:51.384 [2024-11-25 10:26:58.459776] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:51.384 [2024-11-25 10:26:58.459785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:51.384 [2024-11-25 10:26:58.466616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:51.384 [2024-11-25 10:26:58.466652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:51.384 [2024-11-25 10:26:58.474567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:51.384 [2024-11-25 10:26:58.474676] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:51.644 [2024-11-25 10:26:58.498540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75192 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75192 ']' 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75192 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75192 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75192' 00:21:51.644 killing process with pid 75192 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75192 00:21:51.644 10:26:58 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75192 00:21:53.549 [2024-11-25 10:27:00.272816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:53.549 [2024-11-25 10:27:00.305547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:53.549 [2024-11-25 10:27:00.305702] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:53.549 [2024-11-25 10:27:00.314523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:53.549 [2024-11-25 10:27:00.314591] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:53.549 [2024-11-25 10:27:00.314602] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:53.549 [2024-11-25 10:27:00.314626] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:53.549 [2024-11-25 10:27:00.314776] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:55.453 10:27:02 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:55.453 00:21:55.453 real 0m10.692s 00:21:55.453 user 0m8.379s 00:21:55.453 sys 0m3.173s 00:21:55.453 10:27:02 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.453 10:27:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:55.453 ************************************ 00:21:55.453 END TEST test_save_ublk_config 00:21:55.453 ************************************ 00:21:55.453 10:27:02 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75284 00:21:55.453 10:27:02 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:55.453 10:27:02 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.453 10:27:02 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75284 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@835 -- # '[' -z 75284 ']' 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.453 10:27:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:55.453 [2024-11-25 10:27:02.406004] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:21:55.453 [2024-11-25 10:27:02.406134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75284 ] 00:21:55.712 [2024-11-25 10:27:02.593372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:55.712 [2024-11-25 10:27:02.725727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.712 [2024-11-25 10:27:02.725765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.650 10:27:03 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.650 10:27:03 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:56.650 10:27:03 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:56.650 10:27:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.650 10:27:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.650 10:27:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.650 ************************************ 00:21:56.650 START TEST test_create_ublk 00:21:56.650 ************************************ 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:56.650 10:27:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:56.650 [2024-11-25 10:27:03.723523] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:56.650 [2024-11-25 10:27:03.726474] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.650 10:27:03 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:56.650 10:27:03 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.650 10:27:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.218 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.218 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:57.218 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:57.218 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.218 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.218 [2024-11-25 10:27:04.040733] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:57.218 [2024-11-25 10:27:04.041311] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:57.219 [2024-11-25 10:27:04.041341] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:57.219 [2024-11-25 10:27:04.041354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:57.219 [2024-11-25 10:27:04.049896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:57.219 [2024-11-25 10:27:04.049939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:57.219 
[2024-11-25 10:27:04.056576] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:57.219 [2024-11-25 10:27:04.068614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:57.219 [2024-11-25 10:27:04.087535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:57.219 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:57.219 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.219 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:57.219 10:27:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:57.219 { 00:21:57.219 "ublk_device": "/dev/ublkb0", 00:21:57.219 "id": 0, 00:21:57.219 "queue_depth": 512, 00:21:57.219 "num_queues": 4, 00:21:57.219 "bdev_name": "Malloc0" 00:21:57.219 } 00:21:57.219 ]' 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:57.219 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:57.477 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:57.477 10:27:04 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
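For reference, the fio job the harness assembles above can be reproduced standalone against the exported ublk block device. A minimal sketch, assuming /dev/ublkb0 exists and fio is installed; every flag mirrors the fio_template the test builds, nothing is added:

    # Write a 0xcc pattern across the first 128 MiB of the ublk device for 10s,
    # verifying each block as it is written (the same job the harness launches next).
    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0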
00:21:57.477 10:27:04 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:57.477 fio: verification read phase will never start because write phase uses all of runtime 00:21:57.477 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:57.477 fio-3.35 00:21:57.477 Starting 1 process 00:22:09.685 00:22:09.685 fio_test: (groupid=0, jobs=1): err= 0: pid=75336: Mon Nov 25 10:27:14 2024 00:22:09.685 write: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(628MiB/10003msec); 0 zone resets 00:22:09.685 clat (usec): min=38, max=4087, avg=61.35, stdev=98.83 00:22:09.685 lat (usec): min=39, max=4087, avg=61.82, stdev=98.84 00:22:09.685 clat percentiles (usec): 00:22:09.685 | 1.00th=[ 41], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:22:09.685 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:22:09.685 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 66], 00:22:09.685 | 99.00th=[ 76], 99.50th=[ 86], 99.90th=[ 2040], 99.95th=[ 2769], 00:22:09.685 | 99.99th=[ 3654] 00:22:09.685 bw ( KiB/s): min=61048, max=71128, per=100.00%, avg=64424.42, stdev=1932.91, samples=19 00:22:09.685 iops : min=15262, max=17782, avg=16106.11, stdev=483.23, samples=19 00:22:09.685 lat (usec) : 50=3.19%, 100=96.44%, 250=0.14%, 500=0.02%, 750=0.02% 00:22:09.685 lat (usec) : 1000=0.01% 00:22:09.685 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:22:09.685 cpu : usr=3.22%, sys=10.70%, ctx=160868, majf=0, minf=795 00:22:09.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.685 issued rwts: total=0,160873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:09.685 00:22:09.685 Run status group 0 (all jobs): 00:22:09.685 WRITE: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=628MiB (659MB), run=10003-10003msec 00:22:09.685 00:22:09.685 Disk stats (read/write): 00:22:09.685 ublkb0: ios=0/159207, merge=0/0, ticks=0/8561, in_queue=8561, util=99.13% 00:22:09.685 10:27:14 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 [2024-11-25 10:27:14.593338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:09.685 [2024-11-25 10:27:14.631560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:09.685 [2024-11-25 10:27:14.632289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:09.685 [2024-11-25 10:27:14.641561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:09.685 [2024-11-25 10:27:14.641879] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:09.685 [2024-11-25 10:27:14.641900] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.685 10:27:14 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 [2024-11-25 10:27:14.658603] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:09.685 request: 00:22:09.685 { 00:22:09.685 "ublk_id": 0, 00:22:09.685 "method": "ublk_stop_disk", 00:22:09.685 "req_id": 1 00:22:09.685 } 00:22:09.685 Got JSON-RPC error response 00:22:09.685 response: 00:22:09.685 { 00:22:09.685 "code": -19, 00:22:09.685 "message": "No such device" 00:22:09.685 } 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.685 10:27:14 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 [2024-11-25 10:27:14.681610] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:09.685 [2024-11-25 10:27:14.689507] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:09.685 [2024-11-25 10:27:14.689556] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.685 10:27:14 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:14 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.685 10:27:15 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:09.685 10:27:15 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:09.685 ************************************ 00:22:09.685 END TEST test_create_ublk 00:22:09.685 ************************************ 00:22:09.685 10:27:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:09.685 00:22:09.685 real 0m11.842s 00:22:09.685 user 0m0.716s 00:22:09.685 sys 0m1.199s 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.685 10:27:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 10:27:15 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:09.685 10:27:15 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.685 10:27:15 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.685 10:27:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.685 ************************************ 00:22:09.685 START TEST test_create_multi_ublk 00:22:09.685 ************************************ 00:22:09.685 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:09.685 10:27:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:09.685 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.685 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.686 [2024-11-25 10:27:15.634509] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:09.686 [2024-11-25 10:27:15.637046] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.686 10:27:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.686 [2024-11-25 10:27:16.041662] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:22:09.686 [2024-11-25 10:27:16.042101] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:09.686 [2024-11-25 10:27:16.042118] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:09.686 [2024-11-25 10:27:16.042132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:09.686 [2024-11-25 10:27:16.065529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:09.686 [2024-11-25 10:27:16.065569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:09.686 [2024-11-25 10:27:16.077524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:09.686 [2024-11-25 10:27:16.078181] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:09.686 [2024-11-25 10:27:16.129531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.686 [2024-11-25 10:27:16.551652] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:09.686 [2024-11-25 10:27:16.552090] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:09.686 [2024-11-25 10:27:16.552109] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:09.686 [2024-11-25 10:27:16.552117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:09.686 [2024-11-25 10:27:16.559548] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:09.686 [2024-11-25 10:27:16.559573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:09.686 [2024-11-25 10:27:16.567522] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:09.686 [2024-11-25 10:27:16.568094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:09.686 [2024-11-25 10:27:16.580558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:09.686 10:27:16 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.686 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:09.945 [2024-11-25 10:27:16.890637] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:09.945 [2024-11-25 10:27:16.891082] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:09.945 [2024-11-25 10:27:16.891099] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:09.945 [2024-11-25 10:27:16.891110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:09.945 [2024-11-25 10:27:16.898568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:09.945 [2024-11-25 10:27:16.898597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:09.945 [2024-11-25 10:27:16.906520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:09.945 [2024-11-25 10:27:16.907094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:09.945 [2024-11-25 10:27:16.915561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.945 10:27:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 [2024-11-25 10:27:17.226661] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:10.264 [2024-11-25 10:27:17.227111] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:10.264 [2024-11-25 10:27:17.227131] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:10.264 [2024-11-25 10:27:17.227139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:10.264 [2024-11-25 
10:27:17.234549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:10.264 [2024-11-25 10:27:17.234575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:10.264 [2024-11-25 10:27:17.242541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:10.264 [2024-11-25 10:27:17.243104] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:10.264 [2024-11-25 10:27:17.251563] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:10.264 { 00:22:10.264 "ublk_device": "/dev/ublkb0", 00:22:10.264 "id": 0, 00:22:10.264 "queue_depth": 512, 00:22:10.264 "num_queues": 4, 00:22:10.264 "bdev_name": "Malloc0" 00:22:10.264 }, 00:22:10.264 { 00:22:10.264 "ublk_device": "/dev/ublkb1", 00:22:10.264 "id": 1, 00:22:10.264 "queue_depth": 512, 00:22:10.264 "num_queues": 4, 00:22:10.264 "bdev_name": "Malloc1" 00:22:10.264 }, 00:22:10.264 { 00:22:10.264 "ublk_device": "/dev/ublkb2", 00:22:10.264 "id": 2, 00:22:10.264 "queue_depth": 512, 00:22:10.264 "num_queues": 4, 00:22:10.264 "bdev_name": "Malloc2" 00:22:10.264 }, 00:22:10.264 { 00:22:10.264 "ublk_device": "/dev/ublkb3", 00:22:10.264 "id": 3, 00:22:10.264 "queue_depth": 512, 00:22:10.264 "num_queues": 4, 00:22:10.264 "bdev_name": "Malloc3" 00:22:10.264 } 00:22:10.264 ]' 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:10.264 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
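The assertions that follow parse the ublk_get_disks JSON with jq, one field per index. A condensed sketch of the same check, assuming the target from this run is still listening on /var/tmp/spdk.sock (the rpc.py path matches the one used later in this log):

    # Fetch the disk list once and assert on the fields ublk.sh checks per index.
    disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks)
    echo "$disks" | jq -r '.[1].ublk_device'   # expected: /dev/ublkb1
    echo "$disks" | jq -r '.[1].queue_depth'   # expected: 512
    echo "$disks" | jq -r '.[1].num_queues'    # expected: 4
    echo "$disks" | jq -r '.[1].bdev_name'     # expected: Malloc1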
00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:10.546 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:10.805 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:10.805 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:10.805 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:10.805 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:10.805 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:10.806 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:11.065 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:11.065 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.065 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:11.065 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:11.065 10:27:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.065 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:11.065 [2024-11-25 10:27:18.169641] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:11.325 [2024-11-25 10:27:18.202947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:11.325 [2024-11-25 10:27:18.204002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:11.325 [2024-11-25 10:27:18.209543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:11.325 [2024-11-25 10:27:18.209831] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:11.325 [2024-11-25 10:27:18.209845] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:11.325 [2024-11-25 10:27:18.224585] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:11.325 [2024-11-25 10:27:18.265567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:11.325 [2024-11-25 10:27:18.266380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:11.325 [2024-11-25 10:27:18.273537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:11.325 [2024-11-25 10:27:18.273804] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:11.325 [2024-11-25 10:27:18.273818] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:11.325 [2024-11-25 10:27:18.288612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:11.325 [2024-11-25 10:27:18.328554] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:11.325 [2024-11-25 10:27:18.329344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:11.325 [2024-11-25 10:27:18.337567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:11.325 [2024-11-25 10:27:18.337844] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:11.325 [2024-11-25 10:27:18.337857] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
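The loop above stops each disk in turn; once all four are down, the test destroys the target and deletes the malloc bdevs. A hedged sketch of that teardown, using only RPCs that appear in this log:

    # Stop every exported disk, then tear down the ublk target and its bdevs.
    for id in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk "$id"
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
    for m in Malloc0 Malloc1 Malloc2 Malloc3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$m"
    done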
00:22:11.325 [2024-11-25 10:27:18.351627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:11.325 [2024-11-25 10:27:18.392554] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:11.325 [2024-11-25 10:27:18.393250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:11.325 [2024-11-25 10:27:18.400536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:11.325 [2024-11-25 10:27:18.400806] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:11.325 [2024-11-25 10:27:18.400832] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.325 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:11.585 [2024-11-25 10:27:18.600613] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:11.585 [2024-11-25 10:27:18.608513] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:11.585 [2024-11-25 10:27:18.608557] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:11.585 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:11.585 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:11.585 10:27:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:11.585 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.585 10:27:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.522 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.522 10:27:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.522 10:27:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:12.522 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.522 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:12.782 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.782 10:27:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:12.782 10:27:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:12.782 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.782 10:27:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.041 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.041 10:27:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:13.041 10:27:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:13.041 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.041 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.611 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.612 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:13.612 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:13.612 ************************************ 00:22:13.612 END TEST test_create_multi_ublk 00:22:13.612 ************************************ 00:22:13.612 10:27:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:13.612 00:22:13.612 real 0m4.972s 00:22:13.612 user 0m1.027s 00:22:13.612 sys 0m0.238s 00:22:13.612 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.612 10:27:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:13.612 10:27:20 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:13.612 10:27:20 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:13.612 10:27:20 ublk -- ublk/ublk.sh@130 -- # killprocess 75284 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@954 -- # '[' -z 75284 ']' 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@958 -- # kill -0 75284 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@959 -- # uname 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75284 00:22:13.612 killing process with pid 75284 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75284' 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@973 -- # kill 75284 00:22:13.612 10:27:20 ublk -- common/autotest_common.sh@978 -- # wait 75284 00:22:14.996 [2024-11-25 10:27:21.865904] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:14.996 [2024-11-25 10:27:21.865952] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:16.405 00:22:16.405 real 0m31.873s 00:22:16.405 user 0m45.611s 00:22:16.405 sys 0m10.851s 00:22:16.405 10:27:23 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.405 10:27:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.405 ************************************ 00:22:16.405 END TEST ublk 00:22:16.405 ************************************ 00:22:16.405 10:27:23 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:16.405 10:27:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:22:16.405 10:27:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.405 10:27:23 -- common/autotest_common.sh@10 -- # set +x 00:22:16.405 ************************************ 00:22:16.405 START TEST ublk_recovery 00:22:16.405 ************************************ 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:16.405 * Looking for test storage... 00:22:16.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.405 10:27:23 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:16.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.405 --rc genhtml_branch_coverage=1 00:22:16.405 --rc genhtml_function_coverage=1 00:22:16.405 --rc genhtml_legend=1 00:22:16.405 --rc geninfo_all_blocks=1 00:22:16.405 --rc geninfo_unexecuted_blocks=1 00:22:16.405 00:22:16.405 ' 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:16.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.405 --rc genhtml_branch_coverage=1 00:22:16.405 --rc genhtml_function_coverage=1 00:22:16.405 --rc genhtml_legend=1 00:22:16.405 --rc geninfo_all_blocks=1 00:22:16.405 --rc geninfo_unexecuted_blocks=1 00:22:16.405 00:22:16.405 ' 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:16.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.405 --rc genhtml_branch_coverage=1 00:22:16.405 --rc genhtml_function_coverage=1 00:22:16.405 --rc genhtml_legend=1 00:22:16.405 --rc geninfo_all_blocks=1 00:22:16.405 --rc geninfo_unexecuted_blocks=1 00:22:16.405 00:22:16.405 ' 00:22:16.405 10:27:23 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:16.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.405 --rc genhtml_branch_coverage=1 00:22:16.405 --rc genhtml_function_coverage=1 00:22:16.405 --rc genhtml_legend=1 00:22:16.405 --rc geninfo_all_blocks=1 00:22:16.405 --rc geninfo_unexecuted_blocks=1 00:22:16.405 00:22:16.405 ' 00:22:16.405 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:16.405 10:27:23 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:16.405 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:16.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.405 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75715 00:22:16.405 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.406 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:16.406 10:27:23 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75715 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75715 ']' 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.406 10:27:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:16.664 [2024-11-25 10:27:23.539737] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:22:16.664 [2024-11-25 10:27:23.540074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75715 ] 00:22:16.664 [2024-11-25 10:27:23.709631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:16.923 [2024-11-25 10:27:23.885433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.923 [2024-11-25 10:27:23.885467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:17.861 10:27:24 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.861 [2024-11-25 10:27:24.750514] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:17.861 [2024-11-25 10:27:24.753232] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.861 10:27:24 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.861 malloc0 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.861 10:27:24 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:17.861 [2024-11-25 10:27:24.910673] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:17.861 [2024-11-25 10:27:24.910789] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:17.861 [2024-11-25 10:27:24.910804] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:17.861 [2024-11-25 10:27:24.910815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:17.861 [2024-11-25 10:27:24.919614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:17.861 [2024-11-25 10:27:24.919641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:17.861 [2024-11-25 10:27:24.926523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:17.861 [2024-11-25 10:27:24.926673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:17.861 [2024-11-25 10:27:24.948535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:17.861 1 00:22:17.861 10:27:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.861 10:27:24 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:19.239 10:27:25 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75753 00:22:19.239 10:27:25 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:19.239 10:27:25 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:19.239 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:19.239 fio-3.35 00:22:19.239 Starting 1 process 00:22:24.512 10:27:30 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75715 00:22:24.512 10:27:30 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:29.785 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75715 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:29.785 10:27:35 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75862 00:22:29.785 10:27:35 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:29.785 10:27:35 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.785 10:27:35 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75862 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75862 ']' 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.785 10:27:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.785 [2024-11-25 10:27:36.085131] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:22:29.785 [2024-11-25 10:27:36.085456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75862 ] 00:22:29.785 [2024-11-25 10:27:36.268232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:29.785 [2024-11-25 10:27:36.393348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.785 [2024-11-25 10:27:36.393385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:30.353 10:27:37 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.353 [2024-11-25 10:27:37.273516] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:30.353 [2024-11-25 10:27:37.276237] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.353 10:27:37 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.353 malloc0 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.353 10:27:37 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.353 [2024-11-25 10:27:37.422710] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:30.353 [2024-11-25 10:27:37.422763] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:30.353 [2024-11-25 10:27:37.422785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:30.353 [2024-11-25 10:27:37.430560] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:30.353 [2024-11-25 10:27:37.430594] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:30.353 1 00:22:30.353 10:27:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.353 10:27:37 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75753 00:22:31.730 [2024-11-25 10:27:38.432532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:31.730 [2024-11-25 10:27:38.440524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:31.730 [2024-11-25 10:27:38.440549] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:32.662 [2024-11-25 10:27:39.438970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:32.662 [2024-11-25 10:27:39.444529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:32.662 [2024-11-25 10:27:39.444560] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:22:33.597 [2024-11-25 10:27:40.443016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:33.597 [2024-11-25 10:27:40.446532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:33.597 [2024-11-25 10:27:40.446552] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:22:33.597 [2024-11-25 10:27:40.446566] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:33.597 [2024-11-25 10:27:40.446708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:55.533 [2024-11-25 10:28:01.199555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:55.533 [2024-11-25 10:28:01.205725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:55.533 [2024-11-25 10:28:01.211763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:55.533 [2024-11-25 10:28:01.211797] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:22.120 00:23:22.120 fio_test: (groupid=0, jobs=1): err= 0: pid=75759: Mon Nov 25 10:28:26 2024 00:23:22.120 read: IOPS=12.4k, BW=48.3MiB/s (50.7MB/s)(2900MiB/60002msec) 00:23:22.120 slat (nsec): min=1959, max=362536, avg=7150.11, stdev=2239.99 00:23:22.120 clat (usec): min=994, max=30256k, avg=5003.65, stdev=274185.75 00:23:22.120 lat (usec): min=1000, max=30256k, avg=5010.80, stdev=274185.75 00:23:22.120 clat percentiles (usec): 00:23:22.120 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2278], 00:23:22.120 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:23:22.120 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2868], 95.00th=[ 3752], 00:23:22.120 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7373], 99.95th=[ 8291], 00:23:22.120 | 99.99th=[13173] 00:23:22.120 bw ( KiB/s): min= 1512, max=104856, per=100.00%, avg=97534.48, stdev=16246.59, samples=60 00:23:22.120 iops : min= 378, max=26214, avg=24383.58, stdev=4061.66, samples=60 00:23:22.120 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(2895MiB/60002msec); 0 zone resets 00:23:22.120 slat (usec): min=2, max=200, avg= 7.22, stdev= 2.21 00:23:22.120 clat (usec): min=938, max=30256k, avg=5333.52, stdev=287610.39 00:23:22.120 lat (usec): min=944, max=30256k, avg=5340.74, stdev=287610.39 00:23:22.120 clat percentiles (usec): 00:23:22.120 | 1.00th=[ 1975], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2376], 00:23:22.120 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:23:22.120 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2868], 95.00th=[ 3752], 00:23:22.120 | 99.00th=[ 5342], 99.50th=[ 5932], 99.90th=[ 7504], 99.95th=[ 8586], 00:23:22.120 | 99.99th=[13173] 00:23:22.120 bw ( KiB/s): min= 1496, max=104128, per=100.00%, avg=97398.95, stdev=16040.07, samples=60 00:23:22.120 iops : min= 374, max=26032, avg=24349.68, stdev=4010.03, samples=60 00:23:22.120 lat (usec) : 1000=0.01% 00:23:22.120 lat (msec) : 2=1.40%, 4=94.62%, 10=3.94%, 20=0.02%, >=2000=0.01% 00:23:22.120 cpu : usr=6.94%, sys=17.67%, ctx=64353, majf=0, minf=13 00:23:22.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:22.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:22.120 issued rwts: 
total=742511,741178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:22.120 00:23:22.120 Run status group 0 (all jobs): 00:23:22.120 READ: bw=48.3MiB/s (50.7MB/s), 48.3MiB/s-48.3MiB/s (50.7MB/s-50.7MB/s), io=2900MiB (3041MB), run=60002-60002msec 00:23:22.120 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=2895MiB (3036MB), run=60002-60002msec 00:23:22.120 00:23:22.120 Disk stats (read/write): 00:23:22.120 ublkb1: ios=739652/738431, merge=0/0, ticks=3646982/3811291, in_queue=7458273, util=99.96% 00:23:22.120 10:28:26 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.120 [2024-11-25 10:28:26.239186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:22.120 [2024-11-25 10:28:26.275649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:22.120 [2024-11-25 10:28:26.275869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:22.120 [2024-11-25 10:28:26.284534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:22.120 [2024-11-25 10:28:26.284740] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:22.120 [2024-11-25 10:28:26.284754] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.120 10:28:26 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.120 [2024-11-25 10:28:26.291732] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:22.120 [2024-11-25 10:28:26.299519] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:22.120 [2024-11-25 10:28:26.299585] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.120 10:28:26 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:22.120 10:28:26 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:22.120 10:28:26 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75862 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75862 ']' 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75862 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75862 00:23:22.120 killing process with pid 75862 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75862' 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75862 00:23:22.120 10:28:26 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75862 00:23:22.120 
[2024-11-25 10:28:27.980717] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:22.120 [2024-11-25 10:28:27.980777] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:22.379 ************************************ 00:23:22.379 END TEST ublk_recovery 00:23:22.379 ************************************ 00:23:22.379 00:23:22.379 real 1m6.206s 00:23:22.379 user 1m52.081s 00:23:22.379 sys 0m24.290s 00:23:22.379 10:28:29 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.379 10:28:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.379 10:28:29 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:22.379 10:28:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:22.379 10:28:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:22.379 10:28:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.379 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:23:22.638 10:28:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:22.638 10:28:29 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:22.638 10:28:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:22.638 10:28:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.638 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:23:22.638 ************************************ 00:23:22.638 START TEST ftl 00:23:22.638 ************************************ 00:23:22.638 10:28:29 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:22.638 * Looking for test storage... 
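The recovery half that just completed above (kill -9 of pid 75715 mid-run, restart as pid 75862, recovery RPC, then the fio job finishing with util=99.96%) follows the same shape; a sketch assembled from the traced commands:

    kill -9 "$spdk_pid"                          # simulate a target crash while fio is still writing
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # bring the target back up
    spdk_pid=$!
    waitforlisten "$spdk_pid"

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1   # polls GET_DEV_INFO, then START_USER_RECOVERY / END_USER_RECOVERY

    wait "$fio_proc"                             # fio rides out the crash; its 60 s run completes and verifies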
00:23:22.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:22.638 10:28:29 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:22.638 10:28:29 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:22.638 10:28:29 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:22.638 10:28:29 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:22.638 10:28:29 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.638 10:28:29 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.638 10:28:29 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.638 10:28:29 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.638 10:28:29 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.638 10:28:29 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.638 10:28:29 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.638 10:28:29 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.638 10:28:29 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.638 10:28:29 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.898 10:28:29 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:22.898 10:28:29 ftl -- scripts/common.sh@345 -- # : 1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.898 10:28:29 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.898 10:28:29 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@353 -- # local d=1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.898 10:28:29 ftl -- scripts/common.sh@355 -- # echo 1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.898 10:28:29 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:22.898 10:28:29 ftl -- scripts/common.sh@353 -- # local d=2 00:23:22.898 10:28:29 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.898 10:28:29 ftl -- scripts/common.sh@355 -- # echo 2 00:23:22.898 10:28:29 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.898 10:28:29 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.898 10:28:29 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.898 10:28:29 ftl -- scripts/common.sh@368 -- # return 0 00:23:22.898 10:28:29 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.898 10:28:29 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.898 --rc genhtml_branch_coverage=1 00:23:22.898 --rc genhtml_function_coverage=1 00:23:22.898 --rc genhtml_legend=1 00:23:22.898 --rc geninfo_all_blocks=1 00:23:22.898 --rc geninfo_unexecuted_blocks=1 00:23:22.898 00:23:22.898 ' 00:23:22.898 10:28:29 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.898 --rc genhtml_branch_coverage=1 00:23:22.898 --rc genhtml_function_coverage=1 00:23:22.898 --rc genhtml_legend=1 00:23:22.898 --rc geninfo_all_blocks=1 00:23:22.898 --rc geninfo_unexecuted_blocks=1 00:23:22.898 00:23:22.898 ' 00:23:22.898 10:28:29 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.898 --rc genhtml_branch_coverage=1 00:23:22.898 --rc genhtml_function_coverage=1 00:23:22.898 --rc 
genhtml_legend=1 00:23:22.898 --rc geninfo_all_blocks=1 00:23:22.898 --rc geninfo_unexecuted_blocks=1 00:23:22.898 00:23:22.898 ' 00:23:22.898 10:28:29 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:22.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.898 --rc genhtml_branch_coverage=1 00:23:22.898 --rc genhtml_function_coverage=1 00:23:22.898 --rc genhtml_legend=1 00:23:22.898 --rc geninfo_all_blocks=1 00:23:22.898 --rc geninfo_unexecuted_blocks=1 00:23:22.898 00:23:22.898 ' 00:23:22.898 10:28:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:22.898 10:28:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:22.898 10:28:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:22.898 10:28:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:22.898 10:28:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:22.898 10:28:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:22.898 10:28:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.898 10:28:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:22.898 10:28:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:22.898 10:28:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:22.898 10:28:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:22.898 10:28:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:22.898 10:28:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:22.899 10:28:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:22.899 10:28:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:22.899 10:28:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:22.899 10:28:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:22.899 10:28:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:22.899 10:28:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:22.899 10:28:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:22.899 10:28:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:22.899 10:28:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:22.899 10:28:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:22.899 10:28:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:22.899 10:28:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:22.899 10:28:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:22.899 10:28:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:22.899 10:28:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.899 10:28:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:22.899 10:28:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:23.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:23.467 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.467 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.467 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.467 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.726 10:28:30 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76669 00:23:23.726 10:28:30 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:23.726 10:28:30 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76669 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@835 -- # '[' -z 76669 ']' 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.726 10:28:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:23.726 [2024-11-25 10:28:30.714814] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:23:23.726 [2024-11-25 10:28:30.715138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76669 ] 00:23:23.985 [2024-11-25 10:28:30.898319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.985 [2024-11-25 10:28:31.011190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.551 10:28:31 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.551 10:28:31 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:24.551 10:28:31 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:24.811 10:28:31 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:25.749 10:28:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:25.749 10:28:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:26.317 10:28:33 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:26.317 10:28:33 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:26.317 10:28:33 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@50 -- # break 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:26.576 10:28:33 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@63 -- # break 00:23:26.576 10:28:33 ftl -- ftl/ftl.sh@66 -- # killprocess 76669 00:23:26.576 10:28:33 ftl -- common/autotest_common.sh@954 -- # '[' -z 76669 ']' 00:23:26.576 10:28:33 ftl -- common/autotest_common.sh@958 -- # kill -0 76669 00:23:26.576 10:28:33 ftl -- common/autotest_common.sh@959 -- # uname 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76669 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.836 killing process with pid 76669 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76669' 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@973 -- # kill 76669 00:23:26.836 10:28:33 ftl -- common/autotest_common.sh@978 -- # wait 76669 00:23:29.400 10:28:36 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:29.400 10:28:36 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:29.400 10:28:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:29.400 10:28:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.400 10:28:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:29.400 ************************************ 00:23:29.400 START TEST ftl_fio_basic 00:23:29.400 ************************************ 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:29.400 * Looking for test storage... 
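Before dispatching ftl_fio_basic, ftl.sh chose the two PCI addresses passed above (0000:00:11.0 as base device, 0000:00:10.0 as nv-cache) by filtering bdev_get_bdevs output; both jq filters below are copied verbatim from the trace:

    # nv-cache candidates: non-zoned bdevs with 64-byte metadata and >= 1310720 blocks
    cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

    # base candidates: any other large, non-zoned NVMe bdev (the chosen cache address is excluded)
    base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')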
00:23:29.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.400 --rc genhtml_branch_coverage=1 00:23:29.400 --rc genhtml_function_coverage=1 00:23:29.400 --rc genhtml_legend=1 00:23:29.400 --rc geninfo_all_blocks=1 00:23:29.400 --rc geninfo_unexecuted_blocks=1 00:23:29.400 00:23:29.400 ' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.400 --rc 
genhtml_branch_coverage=1 00:23:29.400 --rc genhtml_function_coverage=1 00:23:29.400 --rc genhtml_legend=1 00:23:29.400 --rc geninfo_all_blocks=1 00:23:29.400 --rc geninfo_unexecuted_blocks=1 00:23:29.400 00:23:29.400 ' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.400 --rc genhtml_branch_coverage=1 00:23:29.400 --rc genhtml_function_coverage=1 00:23:29.400 --rc genhtml_legend=1 00:23:29.400 --rc geninfo_all_blocks=1 00:23:29.400 --rc geninfo_unexecuted_blocks=1 00:23:29.400 00:23:29.400 ' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.400 --rc genhtml_branch_coverage=1 00:23:29.400 --rc genhtml_function_coverage=1 00:23:29.400 --rc genhtml_legend=1 00:23:29.400 --rc geninfo_all_blocks=1 00:23:29.400 --rc geninfo_unexecuted_blocks=1 00:23:29.400 00:23:29.400 ' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:29.400 
10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:29.400 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76818 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76818 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76818 ']' 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
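The suite table declared above is how fio.sh maps its third argument ("basic" in this run) to a list of fio job names; a minimal sketch of that dispatch, using the declarations from the trace (the per-test job-file location is an assumption, not shown in the log):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'

    device=$1 cache_device=$2           # 0000:00:11.0 and 0000:00:10.0 in this run
    tests=${suite[$3]}
    [ -n "$tests" ] || { echo "unknown suite: $3" >&2; exit 1; }   # the "[ -z ... ]" guard traced above
    for t in $tests; do
        echo "job: $t"                  # each name corresponds to a fio job file in the repo
    done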
00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.401 10:28:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:29.401 [2024-11-25 10:28:36.462824] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:23:29.401 [2024-11-25 10:28:36.463158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76818 ] 00:23:29.659 [2024-11-25 10:28:36.647950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:29.919 [2024-11-25 10:28:36.773386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.919 [2024-11-25 10:28:36.773474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.919 [2024-11-25 10:28:36.773536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:30.857 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:31.115 10:28:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:31.115 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:31.115 { 00:23:31.115 "name": "nvme0n1", 00:23:31.115 "aliases": [ 00:23:31.115 "92aee8e6-bc53-4d8b-b058-f75d41625256" 00:23:31.115 ], 00:23:31.115 "product_name": "NVMe disk", 00:23:31.115 "block_size": 4096, 00:23:31.115 "num_blocks": 1310720, 00:23:31.115 "uuid": "92aee8e6-bc53-4d8b-b058-f75d41625256", 00:23:31.115 "numa_id": -1, 00:23:31.115 "assigned_rate_limits": { 00:23:31.115 "rw_ios_per_sec": 0, 00:23:31.115 "rw_mbytes_per_sec": 0, 00:23:31.115 "r_mbytes_per_sec": 0, 00:23:31.115 "w_mbytes_per_sec": 0 00:23:31.115 }, 00:23:31.115 "claimed": false, 00:23:31.115 "zoned": false, 00:23:31.115 "supported_io_types": { 00:23:31.115 "read": true, 00:23:31.115 "write": true, 00:23:31.115 "unmap": true, 00:23:31.115 "flush": true, 00:23:31.115 "reset": true, 00:23:31.115 "nvme_admin": true, 00:23:31.115 "nvme_io": true, 00:23:31.115 "nvme_io_md": 
false, 00:23:31.115 "write_zeroes": true, 00:23:31.115 "zcopy": false, 00:23:31.115 "get_zone_info": false, 00:23:31.115 "zone_management": false, 00:23:31.115 "zone_append": false, 00:23:31.115 "compare": true, 00:23:31.115 "compare_and_write": false, 00:23:31.115 "abort": true, 00:23:31.115 "seek_hole": false, 00:23:31.115 "seek_data": false, 00:23:31.115 "copy": true, 00:23:31.115 "nvme_iov_md": false 00:23:31.115 }, 00:23:31.115 "driver_specific": { 00:23:31.115 "nvme": [ 00:23:31.115 { 00:23:31.115 "pci_address": "0000:00:11.0", 00:23:31.115 "trid": { 00:23:31.115 "trtype": "PCIe", 00:23:31.115 "traddr": "0000:00:11.0" 00:23:31.115 }, 00:23:31.115 "ctrlr_data": { 00:23:31.115 "cntlid": 0, 00:23:31.115 "vendor_id": "0x1b36", 00:23:31.115 "model_number": "QEMU NVMe Ctrl", 00:23:31.115 "serial_number": "12341", 00:23:31.115 "firmware_revision": "8.0.0", 00:23:31.115 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:31.115 "oacs": { 00:23:31.115 "security": 0, 00:23:31.115 "format": 1, 00:23:31.115 "firmware": 0, 00:23:31.115 "ns_manage": 1 00:23:31.115 }, 00:23:31.115 "multi_ctrlr": false, 00:23:31.115 "ana_reporting": false 00:23:31.115 }, 00:23:31.115 "vs": { 00:23:31.115 "nvme_version": "1.4" 00:23:31.115 }, 00:23:31.115 "ns_data": { 00:23:31.115 "id": 1, 00:23:31.115 "can_share": false 00:23:31.115 } 00:23:31.115 } 00:23:31.115 ], 00:23:31.115 "mp_policy": "active_passive" 00:23:31.115 } 00:23:31.115 } 00:23:31.115 ]' 00:23:31.115 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:31.374 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:31.633 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:31.633 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:31.633 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=999d0273-e7c7-4ec1-accd-76239cf6d4f9 00:23:31.633 10:28:38 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 999d0273-e7c7-4ec1-accd-76239cf6d4f9 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:32.244 { 00:23:32.244 "name": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:32.244 "aliases": [ 00:23:32.244 "lvs/nvme0n1p0" 00:23:32.244 ], 00:23:32.244 "product_name": "Logical Volume", 00:23:32.244 "block_size": 4096, 00:23:32.244 "num_blocks": 26476544, 00:23:32.244 "uuid": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:32.244 "assigned_rate_limits": { 00:23:32.244 "rw_ios_per_sec": 0, 00:23:32.244 "rw_mbytes_per_sec": 0, 00:23:32.244 "r_mbytes_per_sec": 0, 00:23:32.244 "w_mbytes_per_sec": 0 00:23:32.244 }, 00:23:32.244 "claimed": false, 00:23:32.244 "zoned": false, 00:23:32.244 "supported_io_types": { 00:23:32.244 "read": true, 00:23:32.244 "write": true, 00:23:32.244 "unmap": true, 00:23:32.244 "flush": false, 00:23:32.244 "reset": true, 00:23:32.244 "nvme_admin": false, 00:23:32.244 "nvme_io": false, 00:23:32.244 "nvme_io_md": false, 00:23:32.244 "write_zeroes": true, 00:23:32.244 "zcopy": false, 00:23:32.244 "get_zone_info": false, 00:23:32.244 "zone_management": false, 00:23:32.244 "zone_append": false, 00:23:32.244 "compare": false, 00:23:32.244 "compare_and_write": false, 00:23:32.244 "abort": false, 00:23:32.244 "seek_hole": true, 00:23:32.244 "seek_data": true, 00:23:32.244 "copy": false, 00:23:32.244 "nvme_iov_md": false 00:23:32.244 }, 00:23:32.244 "driver_specific": { 00:23:32.244 "lvol": { 00:23:32.244 "lvol_store_uuid": "999d0273-e7c7-4ec1-accd-76239cf6d4f9", 00:23:32.244 "base_bdev": "nvme0n1", 00:23:32.244 "thin_provision": true, 00:23:32.244 "num_allocated_clusters": 0, 00:23:32.244 "snapshot": false, 00:23:32.244 "clone": false, 00:23:32.244 "esnap_clone": false 00:23:32.244 } 00:23:32.244 } 00:23:32.244 } 00:23:32.244 ]' 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:32.244 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:32.502 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:32.762 { 00:23:32.762 "name": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:32.762 "aliases": [ 00:23:32.762 "lvs/nvme0n1p0" 00:23:32.762 ], 00:23:32.762 "product_name": "Logical Volume", 00:23:32.762 "block_size": 4096, 00:23:32.762 "num_blocks": 26476544, 00:23:32.762 "uuid": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:32.762 "assigned_rate_limits": { 00:23:32.762 "rw_ios_per_sec": 0, 00:23:32.762 "rw_mbytes_per_sec": 0, 00:23:32.762 "r_mbytes_per_sec": 0, 00:23:32.762 "w_mbytes_per_sec": 0 00:23:32.762 }, 00:23:32.762 "claimed": false, 00:23:32.762 "zoned": false, 00:23:32.762 "supported_io_types": { 00:23:32.762 "read": true, 00:23:32.762 "write": true, 00:23:32.762 "unmap": true, 00:23:32.762 "flush": false, 00:23:32.762 "reset": true, 00:23:32.762 "nvme_admin": false, 00:23:32.762 "nvme_io": false, 00:23:32.762 "nvme_io_md": false, 00:23:32.762 "write_zeroes": true, 00:23:32.762 "zcopy": false, 00:23:32.762 "get_zone_info": false, 00:23:32.762 "zone_management": false, 00:23:32.762 "zone_append": false, 00:23:32.762 "compare": false, 00:23:32.762 "compare_and_write": false, 00:23:32.762 "abort": false, 00:23:32.762 "seek_hole": true, 00:23:32.762 "seek_data": true, 00:23:32.762 "copy": false, 00:23:32.762 "nvme_iov_md": false 00:23:32.762 }, 00:23:32.762 "driver_specific": { 00:23:32.762 "lvol": { 00:23:32.762 "lvol_store_uuid": "999d0273-e7c7-4ec1-accd-76239cf6d4f9", 00:23:32.762 "base_bdev": "nvme0n1", 00:23:32.762 "thin_provision": true, 00:23:32.762 "num_allocated_clusters": 0, 00:23:32.762 "snapshot": false, 00:23:32.762 "clone": false, 00:23:32.762 "esnap_clone": false 00:23:32.762 } 00:23:32.762 } 00:23:32.762 } 00:23:32.762 ]' 00:23:32.762 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:33.021 10:28:39 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:33.021 
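The "unary operator expected" error recorded on the next line is a genuine shell pitfall caught by this run: fio.sh line 52 tests a variable that expanded to empty, so bash evaluated '[ -eq 1 ]' with no left operand. A sketch of the failure mode and the usual fix (the variable name is hypothetical; the trace does not show which one was empty):

    opt=""
    [ $opt -eq 1 ]             # unquoted empty expansion -> bash sees: [ -eq 1 ]  (unary operator expected)
    [ "${opt:-0}" -eq 1 ]      # quoted, with a default: evaluates cleanly to false instead of erroring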
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:33.021 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:33.280 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:33.280 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:33.281 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a9ebdde5-a85a-48ea-822e-0e1b1820d74d 00:23:33.281 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:33.281 { 00:23:33.281 "name": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:33.281 "aliases": [ 00:23:33.281 "lvs/nvme0n1p0" 00:23:33.281 ], 00:23:33.281 "product_name": "Logical Volume", 00:23:33.281 "block_size": 4096, 00:23:33.281 "num_blocks": 26476544, 00:23:33.281 "uuid": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:33.281 "assigned_rate_limits": { 00:23:33.281 "rw_ios_per_sec": 0, 00:23:33.281 "rw_mbytes_per_sec": 0, 00:23:33.281 "r_mbytes_per_sec": 0, 00:23:33.281 "w_mbytes_per_sec": 0 00:23:33.281 }, 00:23:33.281 "claimed": false, 00:23:33.281 "zoned": false, 00:23:33.281 "supported_io_types": { 00:23:33.281 "read": true, 00:23:33.281 "write": true, 00:23:33.281 "unmap": true, 00:23:33.281 "flush": false, 00:23:33.281 "reset": true, 00:23:33.281 "nvme_admin": false, 00:23:33.281 "nvme_io": false, 00:23:33.281 "nvme_io_md": false, 00:23:33.281 "write_zeroes": true, 00:23:33.281 "zcopy": false, 00:23:33.281 "get_zone_info": false, 00:23:33.281 "zone_management": false, 00:23:33.281 "zone_append": false, 00:23:33.281 "compare": false, 00:23:33.281 "compare_and_write": false, 00:23:33.281 "abort": false, 00:23:33.281 "seek_hole": true, 00:23:33.281 "seek_data": true, 00:23:33.281 "copy": false, 00:23:33.281 "nvme_iov_md": false 00:23:33.281 }, 00:23:33.281 "driver_specific": { 00:23:33.281 "lvol": { 00:23:33.281 "lvol_store_uuid": "999d0273-e7c7-4ec1-accd-76239cf6d4f9", 00:23:33.281 "base_bdev": "nvme0n1", 00:23:33.281 "thin_provision": true, 00:23:33.281 "num_allocated_clusters": 0, 00:23:33.281 "snapshot": false, 00:23:33.281 "clone": false, 00:23:33.281 "esnap_clone": false 00:23:33.281 } 00:23:33.281 } 00:23:33.281 } 00:23:33.281 ]' 00:23:33.281 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:33.541 10:28:40 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a9ebdde5-a85a-48ea-822e-0e1b1820d74d -c nvc0n1p0 --l2p_dram_limit 60 00:23:33.541 [2024-11-25 10:28:40.627424] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.627655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:33.541 [2024-11-25 10:28:40.627688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.541 [2024-11-25 10:28:40.627700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.627795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.627808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.541 [2024-11-25 10:28:40.627821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:33.541 [2024-11-25 10:28:40.627831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.627874] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:33.541 [2024-11-25 10:28:40.628949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:33.541 [2024-11-25 10:28:40.628983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.628995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.541 [2024-11-25 10:28:40.629008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:23:33.541 [2024-11-25 10:28:40.629018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.629108] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aba092dc-fbb9-49db-9086-2569456a55e1 00:23:33.541 [2024-11-25 10:28:40.630557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.630588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:33.541 [2024-11-25 10:28:40.630601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:33.541 [2024-11-25 10:28:40.630614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.638086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.638243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.541 [2024-11-25 10:28:40.638264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.425 ms 00:23:33.541 [2024-11-25 10:28:40.638283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.638400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.638416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.541 [2024-11-25 10:28:40.638427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:33.541 [2024-11-25 10:28:40.638445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.541 [2024-11-25 10:28:40.638532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.541 [2024-11-25 10:28:40.638548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:33.542 [2024-11-25 10:28:40.638559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:33.542 [2024-11-25 10:28:40.638572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:33.542 [2024-11-25 10:28:40.638608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:33.542 [2024-11-25 10:28:40.643260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.542 [2024-11-25 10:28:40.643288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.542 [2024-11-25 10:28:40.643308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.667 ms 00:23:33.542 [2024-11-25 10:28:40.643319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.542 [2024-11-25 10:28:40.643368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.542 [2024-11-25 10:28:40.643379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:33.542 [2024-11-25 10:28:40.643392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:33.542 [2024-11-25 10:28:40.643402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.542 [2024-11-25 10:28:40.643446] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:33.542 [2024-11-25 10:28:40.643600] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:33.542 [2024-11-25 10:28:40.643625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:33.542 [2024-11-25 10:28:40.643639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:33.542 [2024-11-25 10:28:40.643654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:33.542 [2024-11-25 10:28:40.643666] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:33.542 [2024-11-25 10:28:40.643679] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:33.542 [2024-11-25 10:28:40.643690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:33.542 [2024-11-25 10:28:40.643702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:33.542 [2024-11-25 10:28:40.643712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:33.542 [2024-11-25 10:28:40.643728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.542 [2024-11-25 10:28:40.643738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:33.542 [2024-11-25 10:28:40.643753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:23:33.542 [2024-11-25 10:28:40.643763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.542 [2024-11-25 10:28:40.643847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.542 [2024-11-25 10:28:40.643857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:33.542 [2024-11-25 10:28:40.643870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:33.542 [2024-11-25 10:28:40.643880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.542 [2024-11-25 10:28:40.643986] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:33.542 [2024-11-25 10:28:40.644000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:33.542 
[2024-11-25 10:28:40.644014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:33.542 [2024-11-25 10:28:40.644046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:33.542 [2024-11-25 10:28:40.644079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.542 [2024-11-25 10:28:40.644100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:33.542 [2024-11-25 10:28:40.644109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:33.542 [2024-11-25 10:28:40.644120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.542 [2024-11-25 10:28:40.644129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:33.542 [2024-11-25 10:28:40.644141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:33.542 [2024-11-25 10:28:40.644150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:33.542 [2024-11-25 10:28:40.644179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:33.542 [2024-11-25 10:28:40.644211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:33.542 [2024-11-25 10:28:40.644241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:33.542 [2024-11-25 10:28:40.644273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:33.542 [2024-11-25 10:28:40.644303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:33.542 [2024-11-25 10:28:40.644338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:23:33.542 [2024-11-25 10:28:40.644373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:33.542 [2024-11-25 10:28:40.644382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:33.542 [2024-11-25 10:28:40.644393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.542 [2024-11-25 10:28:40.644405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:33.542 [2024-11-25 10:28:40.644417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:33.542 [2024-11-25 10:28:40.644426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:33.542 [2024-11-25 10:28:40.644447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:33.542 [2024-11-25 10:28:40.644460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644469] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:33.542 [2024-11-25 10:28:40.644482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:33.542 [2024-11-25 10:28:40.644502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.542 [2024-11-25 10:28:40.644526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:33.542 [2024-11-25 10:28:40.644540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:33.542 [2024-11-25 10:28:40.644549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:33.542 [2024-11-25 10:28:40.644561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:33.542 [2024-11-25 10:28:40.644570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:33.542 [2024-11-25 10:28:40.644582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:33.542 [2024-11-25 10:28:40.644600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:33.542 [2024-11-25 10:28:40.644616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.542 [2024-11-25 10:28:40.644627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:33.542 [2024-11-25 10:28:40.644640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:33.542 [2024-11-25 10:28:40.644650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:33.542 [2024-11-25 10:28:40.644663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:33.542 [2024-11-25 10:28:40.644673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:33.542 [2024-11-25 10:28:40.644685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:33.542 [2024-11-25 
10:28:40.644696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:33.542 [2024-11-25 10:28:40.644708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:33.542 [2024-11-25 10:28:40.644718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:33.542 [2024-11-25 10:28:40.644733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:33.542 [2024-11-25 10:28:40.644743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:33.542 [2024-11-25 10:28:40.644757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:33.542 [2024-11-25 10:28:40.644767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:33.542 [2024-11-25 10:28:40.644780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:33.542 [2024-11-25 10:28:40.644792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:33.542 [2024-11-25 10:28:40.644809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.543 [2024-11-25 10:28:40.644824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:33.543 [2024-11-25 10:28:40.644837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:33.543 [2024-11-25 10:28:40.644847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:33.543 [2024-11-25 10:28:40.644859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:33.543 [2024-11-25 10:28:40.644870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.543 [2024-11-25 10:28:40.644882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:33.543 [2024-11-25 10:28:40.644892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:23:33.543 [2024-11-25 10:28:40.644905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.543 [2024-11-25 10:28:40.644971] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
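The "fio.sh: line 52: [: -eq: unary operator expected" message at the top of this run is a plain bash expansion problem rather than an FTL failure: a numeric test of the form [ $var -eq ... ] in which $var expands to nothing leaves the [ builtin with a missing operand. A minimal sketch of the failure mode and the usual guard follows; the variable name is hypothetical, since the actual line 52 of fio.sh is not shown in this log:

    #!/usr/bin/env bash
    count=""
    # Unquoted and empty, the test collapses to `[ -eq 0 ]` and bash prints
    # "[: -eq: unary operator expected" (left commented out so the sketch
    # runs cleanly):
    # [ $count -eq 0 ] && echo "zero"
    # Defaulting the expansion keeps the comparison well-formed either way:
    if [ "${count:-0}" -eq 0 ]; then
        echo "count is zero or unset"
    fi

The get_bdev_size helper that runs next is straight arithmetic over the bdev_get_bdevs output: 26476544 blocks x 4096 bytes per block / 1048576 bytes per MiB = 103424 MiB, which matches the "echo 103424" above and the 103424.00 MiB base-device capacity reported once bdev_ftl_create starts.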
00:23:33.543 [2024-11-25 10:28:40.644989] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:37.770 [2024-11-25 10:28:44.799059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.770 [2024-11-25 10:28:44.799123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:37.770 [2024-11-25 10:28:44.799140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4160.829 ms 00:23:37.770 [2024-11-25 10:28:44.799153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.770 [2024-11-25 10:28:44.837970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.770 [2024-11-25 10:28:44.838366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.770 [2024-11-25 10:28:44.838397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.595 ms 00:23:37.770 [2024-11-25 10:28:44.838411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.770 [2024-11-25 10:28:44.838595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.770 [2024-11-25 10:28:44.838612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:37.770 [2024-11-25 10:28:44.838623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:37.770 [2024-11-25 10:28:44.838639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.890211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.890665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.029 [2024-11-25 10:28:44.890743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.601 ms 00:23:38.029 [2024-11-25 10:28:44.890799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.890896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.890951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.029 [2024-11-25 10:28:44.891004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:38.029 [2024-11-25 10:28:44.891062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.891644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.891741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.029 [2024-11-25 10:28:44.891814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:23:38.029 [2024-11-25 10:28:44.891891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.892105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.892247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.029 [2024-11-25 10:28:44.892328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:38.029 [2024-11-25 10:28:44.892408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.915897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.916088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.029 [2024-11-25 
10:28:44.916152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:23:38.029 [2024-11-25 10:28:44.916215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:44.930922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:38.029 [2024-11-25 10:28:44.948251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:44.948463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:38.029 [2024-11-25 10:28:44.948584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.897 ms 00:23:38.029 [2024-11-25 10:28:44.948647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:45.035577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:45.036025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:38.029 [2024-11-25 10:28:45.036151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.964 ms 00:23:38.029 [2024-11-25 10:28:45.036208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:45.036488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:45.036695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:38.029 [2024-11-25 10:28:45.036824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:23:38.029 [2024-11-25 10:28:45.036880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:45.077447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:45.077718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:38.029 [2024-11-25 10:28:45.077794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.312 ms 00:23:38.029 [2024-11-25 10:28:45.077849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:45.115793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:45.116208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:38.029 [2024-11-25 10:28:45.116330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.707 ms 00:23:38.029 [2024-11-25 10:28:45.116388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.029 [2024-11-25 10:28:45.117274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.029 [2024-11-25 10:28:45.117464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:38.029 [2024-11-25 10:28:45.117568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:23:38.029 [2024-11-25 10:28:45.117626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.217290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.217614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:38.287 [2024-11-25 10:28:45.217735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.695 ms 00:23:38.287 [2024-11-25 10:28:45.217795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 
10:28:45.256000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.256197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:38.287 [2024-11-25 10:28:45.256387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.115 ms 00:23:38.287 [2024-11-25 10:28:45.256570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.293789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.294019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:38.287 [2024-11-25 10:28:45.294239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.145 ms 00:23:38.287 [2024-11-25 10:28:45.294326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.330707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.330864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:38.287 [2024-11-25 10:28:45.330932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.273 ms 00:23:38.287 [2024-11-25 10:28:45.330985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.331183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.331264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:38.287 [2024-11-25 10:28:45.331336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:38.287 [2024-11-25 10:28:45.331388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.331566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.287 [2024-11-25 10:28:45.331724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:38.287 [2024-11-25 10:28:45.331839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:38.287 [2024-11-25 10:28:45.331992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.287 [2024-11-25 10:28:45.333210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4712.935 ms, result 0 00:23:38.287 { 00:23:38.287 "name": "ftl0", 00:23:38.287 "uuid": "aba092dc-fbb9-49db-9086-2569456a55e1" 00:23:38.287 } 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:38.287 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:38.546 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:38.805 [ 00:23:38.805 { 00:23:38.805 "name": "ftl0", 00:23:38.805 "aliases": [ 00:23:38.805 "aba092dc-fbb9-49db-9086-2569456a55e1" 00:23:38.805 ], 00:23:38.805 "product_name": "FTL 
disk", 00:23:38.805 "block_size": 4096, 00:23:38.805 "num_blocks": 20971520, 00:23:38.805 "uuid": "aba092dc-fbb9-49db-9086-2569456a55e1", 00:23:38.805 "assigned_rate_limits": { 00:23:38.805 "rw_ios_per_sec": 0, 00:23:38.805 "rw_mbytes_per_sec": 0, 00:23:38.805 "r_mbytes_per_sec": 0, 00:23:38.805 "w_mbytes_per_sec": 0 00:23:38.805 }, 00:23:38.805 "claimed": false, 00:23:38.805 "zoned": false, 00:23:38.805 "supported_io_types": { 00:23:38.805 "read": true, 00:23:38.805 "write": true, 00:23:38.805 "unmap": true, 00:23:38.805 "flush": true, 00:23:38.805 "reset": false, 00:23:38.805 "nvme_admin": false, 00:23:38.805 "nvme_io": false, 00:23:38.805 "nvme_io_md": false, 00:23:38.805 "write_zeroes": true, 00:23:38.805 "zcopy": false, 00:23:38.805 "get_zone_info": false, 00:23:38.805 "zone_management": false, 00:23:38.805 "zone_append": false, 00:23:38.805 "compare": false, 00:23:38.805 "compare_and_write": false, 00:23:38.805 "abort": false, 00:23:38.805 "seek_hole": false, 00:23:38.805 "seek_data": false, 00:23:38.805 "copy": false, 00:23:38.805 "nvme_iov_md": false 00:23:38.805 }, 00:23:38.805 "driver_specific": { 00:23:38.805 "ftl": { 00:23:38.805 "base_bdev": "a9ebdde5-a85a-48ea-822e-0e1b1820d74d", 00:23:38.805 "cache": "nvc0n1p0" 00:23:38.805 } 00:23:38.805 } 00:23:38.805 } 00:23:38.805 ] 00:23:38.805 10:28:45 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:23:38.805 10:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:23:38.805 10:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:39.063 10:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:23:39.063 10:28:45 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:39.063 [2024-11-25 10:28:46.171673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.063 [2024-11-25 10:28:46.172053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:39.063 [2024-11-25 10:28:46.172134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:39.063 [2024-11-25 10:28:46.172190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.063 [2024-11-25 10:28:46.172281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:39.323 [2024-11-25 10:28:46.176632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.176725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:39.323 [2024-11-25 10:28:46.176797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.089 ms 00:23:39.323 [2024-11-25 10:28:46.176847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.177340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.177530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:39.323 [2024-11-25 10:28:46.177653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:23:39.323 [2024-11-25 10:28:46.177742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.180374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.180557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:39.323 
[2024-11-25 10:28:46.180716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:23:39.323 [2024-11-25 10:28:46.180803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.185922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.186110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:39.323 [2024-11-25 10:28:46.186169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.031 ms 00:23:39.323 [2024-11-25 10:28:46.186214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.222644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.222833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:39.323 [2024-11-25 10:28:46.222943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.353 ms 00:23:39.323 [2024-11-25 10:28:46.223001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.244429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.244644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:39.323 [2024-11-25 10:28:46.244735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.376 ms 00:23:39.323 [2024-11-25 10:28:46.244789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.245030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.245188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:39.323 [2024-11-25 10:28:46.245306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:23:39.323 [2024-11-25 10:28:46.245395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.281611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.281711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:39.323 [2024-11-25 10:28:46.281771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.081 ms 00:23:39.323 [2024-11-25 10:28:46.281922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.317351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.317574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:39.323 [2024-11-25 10:28:46.317675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.375 ms 00:23:39.323 [2024-11-25 10:28:46.317729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.352916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.353115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:39.323 [2024-11-25 10:28:46.353205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.149 ms 00:23:39.323 [2024-11-25 10:28:46.353280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.388803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.323 [2024-11-25 10:28:46.389000] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:39.323 [2024-11-25 10:28:46.389091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.413 ms 00:23:39.323 [2024-11-25 10:28:46.389147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.323 [2024-11-25 10:28:46.389259] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:39.323 [2024-11-25 10:28:46.389412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.389997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.390996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 
[2024-11-25 10:28:46.391125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:39.323 [2024-11-25 10:28:46.391880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.391934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.391978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:23:39.324 [2024-11-25 10:28:46.392635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.392992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:39.324 [2024-11-25 10:28:46.393314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:39.324 [2024-11-25 10:28:46.393327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aba092dc-fbb9-49db-9086-2569456a55e1 00:23:39.324 [2024-11-25 10:28:46.393338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:39.324 [2024-11-25 10:28:46.393353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:39.324 [2024-11-25 10:28:46.393366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:39.324 [2024-11-25 10:28:46.393379] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:39.324 [2024-11-25 10:28:46.393390] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:39.324 [2024-11-25 10:28:46.393403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:39.324 [2024-11-25 10:28:46.393413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:39.324 [2024-11-25 10:28:46.393425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:39.324 [2024-11-25 10:28:46.393434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:39.324 [2024-11-25 10:28:46.393447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.324 [2024-11-25 10:28:46.393457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:39.324 [2024-11-25 10:28:46.393471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.214 ms 00:23:39.324 [2024-11-25 10:28:46.393481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.324 [2024-11-25 10:28:46.415415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.324 [2024-11-25 10:28:46.415608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:39.324 [2024-11-25 10:28:46.415704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.880 ms 00:23:39.324 [2024-11-25 10:28:46.415755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.324 [2024-11-25 10:28:46.416344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.324 [2024-11-25 10:28:46.416502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:39.324 [2024-11-25 10:28:46.416609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:23:39.324 [2024-11-25 10:28:46.416696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.583 [2024-11-25 10:28:46.486572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.583 [2024-11-25 10:28:46.486891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:39.583 [2024-11-25 10:28:46.487045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.583 [2024-11-25 10:28:46.487265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
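Two details in the shutdown trace here are worth unpacking. The statistics dump reports "WAF: inf" because write amplification here is the ratio of total writes to user writes, and this unload happens before any user I/O has reached ftl0: 960 total (metadata) writes over 0 user writes is undefined, printed as inf. The Rollback entries are the management layer unwinding the startup steps in reverse order, ending with the cache and base bdevs being closed. A minimal sketch of the RPC call that drives this sequence, using the same rpc.py path the test invokes (the surrounding process management is elided):

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdev_ftl_unload persists the L2P and NV-cache metadata, sets the clean
    # state, dumps per-band statistics, then rolls back the startup steps:
    "$RPC" bdev_ftl_unload -b ftl0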
00:23:39.583 [2024-11-25 10:28:46.487432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.583 [2024-11-25 10:28:46.487489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:39.583 [2024-11-25 10:28:46.487651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.583 [2024-11-25 10:28:46.487722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.583 [2024-11-25 10:28:46.487914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.583 [2024-11-25 10:28:46.488074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:39.583 [2024-11-25 10:28:46.488241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.583 [2024-11-25 10:28:46.488397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.583 [2024-11-25 10:28:46.488526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.583 [2024-11-25 10:28:46.488577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:39.583 [2024-11-25 10:28:46.488623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.583 [2024-11-25 10:28:46.488749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.583 [2024-11-25 10:28:46.627701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.583 [2024-11-25 10:28:46.627984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:39.583 [2024-11-25 10:28:46.628173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.583 [2024-11-25 10:28:46.628313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.728898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.729250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:39.842 [2024-11-25 10:28:46.729406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.729488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.729704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.729778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.842 [2024-11-25 10:28:46.729873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.729930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.730149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.730227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.842 [2024-11-25 10:28:46.730285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.730330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.730511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.730670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.842 [2024-11-25 10:28:46.730773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 
10:28:46.730849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.731010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.731065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:39.842 [2024-11-25 10:28:46.731121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.731305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.731429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.731474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.842 [2024-11-25 10:28:46.731548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.731594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.731697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:39.842 [2024-11-25 10:28:46.731837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.842 [2024-11-25 10:28:46.731916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:39.842 [2024-11-25 10:28:46.731983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.842 [2024-11-25 10:28:46.732211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 561.410 ms, result 0 00:23:39.842 true 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76818 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76818 ']' 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76818 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76818 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.842 killing process with pid 76818 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76818' 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76818 00:23:39.842 10:28:46 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76818 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:45.123 10:28:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:45.123 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:45.123 fio-3.35 00:23:45.123 Starting 1 thread 00:23:50.395 00:23:50.395 test: (groupid=0, jobs=1): err= 0: pid=77041: Mon Nov 25 10:28:57 2024 00:23:50.395 read: IOPS=946, BW=62.8MiB/s (65.9MB/s)(255MiB/4051msec) 00:23:50.395 slat (nsec): min=4144, max=24677, avg=5928.29, stdev=2294.78 00:23:50.395 clat (usec): min=294, max=791, avg=472.75, stdev=56.52 00:23:50.395 lat (usec): min=302, max=804, avg=478.68, stdev=56.78 00:23:50.395 clat percentiles (usec): 00:23:50.395 | 1.00th=[ 330], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 437], 00:23:50.395 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 469], 60.00th=[ 502], 00:23:50.395 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 529], 95.00th=[ 562], 00:23:50.395 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 652], 99.95th=[ 758], 00:23:50.395 | 99.99th=[ 791] 00:23:50.395 write: IOPS=953, BW=63.3MiB/s (66.4MB/s)(256MiB/4046msec); 0 zone resets 00:23:50.395 slat (nsec): min=15441, max=96285, avg=22575.01, stdev=6543.80 00:23:50.395 clat (usec): min=313, max=1053, avg=538.83, stdev=74.47 00:23:50.395 lat (usec): min=329, max=1082, avg=561.41, stdev=75.20 00:23:50.395 clat percentiles (usec): 00:23:50.395 | 1.00th=[ 392], 5.00th=[ 420], 10.00th=[ 457], 20.00th=[ 474], 00:23:50.395 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:23:50.395 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 627], 00:23:50.395 | 99.00th=[ 840], 99.50th=[ 906], 99.90th=[ 988], 99.95th=[ 1020], 00:23:50.395 | 99.99th=[ 1057] 00:23:50.395 bw ( KiB/s): min=60656, max=67728, per=100.00%, avg=64804.00, stdev=2066.38, samples=8 00:23:50.395 iops : min= 892, max= 996, avg=953.00, stdev=30.39, samples=8 00:23:50.395 lat (usec) : 500=43.79%, 750=55.38%, 1000=0.81% 00:23:50.395 lat 
(msec) : 2=0.03% 00:23:50.396 cpu : usr=99.36%, sys=0.02%, ctx=9, majf=0, minf=1169 00:23:50.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:50.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.396 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:50.396 00:23:50.396 Run status group 0 (all jobs): 00:23:50.396 READ: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=255MiB (267MB), run=4051-4051msec 00:23:50.396 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=256MiB (269MB), run=4046-4046msec 00:23:52.300 ----------------------------------------------------- 00:23:52.300 Suppressions used: 00:23:52.300 count bytes template 00:23:52.300 1 5 /usr/src/fio/parse.c 00:23:52.300 1 8 libtcmalloc_minimal.so 00:23:52.300 1 904 libcrypto.so 00:23:52.300 ----------------------------------------------------- 00:23:52.300 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:52.300 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:52.301 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:52.301 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.301 10:28:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:52.301 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:52.301 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:52.301 fio-3.35 00:23:52.301 Starting 2 threads 00:24:24.463 00:24:24.463 first_half: (groupid=0, jobs=1): err= 0: pid=77144: Mon Nov 25 10:29:26 2024 00:24:24.463 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(255MiB/25934msec) 00:24:24.463 slat (nsec): min=3439, max=47891, avg=7826.04, stdev=3506.07 00:24:24.463 clat (usec): min=1026, max=268527, avg=38830.64, stdev=19753.69 00:24:24.463 lat (usec): min=1032, max=268531, avg=38838.46, stdev=19754.16 00:24:24.463 clat percentiles (msec): 00:24:24.463 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:24:24.463 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 37], 00:24:24.463 | 70.00th=[ 38], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 51], 00:24:24.463 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 205], 99.95th=[ 234], 00:24:24.463 | 99.99th=[ 262] 00:24:24.463 write: IOPS=3236, BW=12.6MiB/s (13.3MB/s)(256MiB/20251msec); 0 zone resets 00:24:24.463 slat (usec): min=4, max=706, avg= 8.44, stdev= 6.23 00:24:24.463 clat (usec): min=527, max=94872, avg=11860.67, stdev=19276.81 00:24:24.463 lat (usec): min=535, max=94878, avg=11869.11, stdev=19276.84 00:24:24.463 clat percentiles (usec): 00:24:24.463 | 1.00th=[ 930], 5.00th=[ 1254], 10.00th=[ 1516], 20.00th=[ 1827], 00:24:24.463 | 30.00th=[ 2409], 40.00th=[ 4752], 50.00th=[ 6325], 60.00th=[ 7570], 00:24:24.463 | 70.00th=[ 8979], 80.00th=[11469], 90.00th=[29492], 95.00th=[74974], 00:24:24.463 | 99.00th=[86508], 99.50th=[88605], 99.90th=[92799], 99.95th=[92799], 00:24:24.463 | 99.99th=[93848] 00:24:24.463 bw ( KiB/s): min= 2408, max=43520, per=84.44%, avg=20971.52, stdev=11133.77, samples=25 00:24:24.463 iops : min= 602, max=10880, avg=5242.88, stdev=2783.44, samples=25 00:24:24.463 lat (usec) : 750=0.11%, 1000=0.72% 00:24:24.463 lat (msec) : 2=11.45%, 4=6.37%, 10=18.68%, 20=9.03%, 50=47.84% 00:24:24.463 lat (msec) : 100=4.61%, 250=1.19%, 500=0.01% 00:24:24.463 cpu : usr=99.22%, sys=0.18%, ctx=40, majf=0, minf=5573 00:24:24.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:24.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.463 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.463 issued rwts: total=65262,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.463 second_half: (groupid=0, jobs=1): err= 0: pid=77145: Mon Nov 25 10:29:26 2024 00:24:24.463 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(255MiB/25718msec) 00:24:24.463 slat (nsec): min=3430, max=66566, avg=7115.84, stdev=3543.43 00:24:24.463 clat (usec): min=1002, max=274266, avg=39935.86, stdev=19876.71 00:24:24.463 lat (usec): min=1010, max=274270, avg=39942.97, stdev=19877.30 00:24:24.463 clat percentiles (msec): 00:24:24.463 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:24:24.463 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:24:24.464 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 58], 00:24:24.464 | 
99.00th=[ 150], 99.50th=[ 174], 99.90th=[ 218], 99.95th=[ 222], 00:24:24.464 | 99.99th=[ 268] 00:24:24.464 write: IOPS=3104, BW=12.1MiB/s (12.7MB/s)(256MiB/21111msec); 0 zone resets 00:24:24.464 slat (usec): min=4, max=310, avg= 7.90, stdev= 4.32 00:24:24.464 clat (usec): min=444, max=94490, avg=10406.41, stdev=18390.87 00:24:24.464 lat (usec): min=453, max=94496, avg=10414.31, stdev=18390.82 00:24:24.464 clat percentiles (usec): 00:24:24.464 | 1.00th=[ 1004], 5.00th=[ 1401], 10.00th=[ 1647], 20.00th=[ 1958], 00:24:24.464 | 30.00th=[ 2507], 40.00th=[ 4228], 50.00th=[ 5473], 60.00th=[ 6521], 00:24:24.464 | 70.00th=[ 7373], 80.00th=[10028], 90.00th=[13304], 95.00th=[71828], 00:24:24.464 | 99.00th=[86508], 99.50th=[87557], 99.90th=[91751], 99.95th=[92799], 00:24:24.464 | 99.99th=[93848] 00:24:24.464 bw ( KiB/s): min= 80, max=47080, per=91.79%, avg=22795.13, stdev=16314.78, samples=23 00:24:24.464 iops : min= 20, max=11770, avg=5698.78, stdev=4078.70, samples=23 00:24:24.464 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.43% 00:24:24.464 lat (msec) : 2=10.23%, 4=8.65%, 10=20.99%, 20=6.34%, 50=46.91% 00:24:24.464 lat (msec) : 100=4.98%, 250=1.42%, 500=0.01% 00:24:24.464 cpu : usr=99.33%, sys=0.10%, ctx=39, majf=0, minf=5550 00:24:24.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:24.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.464 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.464 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.464 00:24:24.464 Run status group 0 (all jobs): 00:24:24.464 READ: bw=19.7MiB/s (20.6MB/s), 9.83MiB/s-9.92MiB/s (10.3MB/s-10.4MB/s), io=510MiB (535MB), run=25718-25934msec 00:24:24.464 WRITE: bw=24.3MiB/s (25.4MB/s), 12.1MiB/s-12.6MiB/s (12.7MB/s-13.3MB/s), io=512MiB (537MB), run=20251-21111msec 00:24:24.464 ----------------------------------------------------- 00:24:24.464 Suppressions used: 00:24:24.464 count bytes template 00:24:24.464 2 10 /usr/src/fio/parse.c 00:24:24.464 2 192 /usr/src/fio/iolog.c 00:24:24.464 1 8 libtcmalloc_minimal.so 00:24:24.464 1 904 libcrypto.so 00:24:24.464 ----------------------------------------------------- 00:24:24.464 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.464 10:29:29 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:24.464 10:29:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:24.464 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:24.464 fio-3.35 00:24:24.464 Starting 1 thread 00:24:39.351 00:24:39.351 test: (groupid=0, jobs=1): err= 0: pid=77480: Mon Nov 25 10:29:44 2024 00:24:39.351 read: IOPS=7699, BW=30.1MiB/s (31.5MB/s)(255MiB/8468msec) 00:24:39.351 slat (nsec): min=3409, max=36292, avg=5170.43, stdev=1674.17 00:24:39.351 clat (usec): min=608, max=31373, avg=16614.08, stdev=1283.93 00:24:39.351 lat (usec): min=612, max=31379, avg=16619.25, stdev=1283.94 00:24:39.351 clat percentiles (usec): 00:24:39.351 | 1.00th=[15533], 5.00th=[15664], 10.00th=[15795], 20.00th=[15926], 00:24:39.351 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:24:39.351 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17433], 95.00th=[18482], 00:24:39.351 | 99.00th=[21890], 99.50th=[22676], 99.90th=[28181], 99.95th=[28705], 00:24:39.351 | 99.99th=[30802] 00:24:39.351 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(256MiB/4811msec); 0 zone resets 00:24:39.351 slat (usec): min=4, max=729, avg= 7.63, stdev= 6.04 00:24:39.351 clat (usec): min=639, max=58110, avg=9348.75, stdev=11554.39 00:24:39.351 lat (usec): min=645, max=58117, avg=9356.37, stdev=11554.38 00:24:39.351 clat percentiles (usec): 00:24:39.351 | 1.00th=[ 963], 5.00th=[ 1139], 10.00th=[ 1287], 20.00th=[ 1467], 00:24:39.351 | 30.00th=[ 1647], 40.00th=[ 2057], 50.00th=[ 6194], 60.00th=[ 7046], 00:24:39.351 | 70.00th=[ 8029], 80.00th=[ 9765], 90.00th=[33817], 95.00th=[35390], 00:24:39.351 | 99.00th=[38011], 99.50th=[42730], 99.90th=[55313], 99.95th=[56361], 00:24:39.351 | 99.99th=[57410] 00:24:39.351 bw ( KiB/s): min=28800, max=75840, per=96.22%, avg=52428.80, stdev=12829.50, samples=10 00:24:39.351 iops : min= 7200, max=18960, avg=13107.20, stdev=3207.37, samples=10 00:24:39.351 lat (usec) : 750=0.02%, 1000=0.82% 00:24:39.351 lat (msec) : 2=19.05%, 4=1.23%, 10=19.50%, 20=49.77%, 50=9.47% 00:24:39.351 lat (msec) : 100=0.14% 00:24:39.351 cpu : usr=98.97%, sys=0.30%, ctx=20, majf=0, minf=5565 
00:24:39.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:39.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.351 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:39.351 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:39.351 00:24:39.351 Run status group 0 (all jobs): 00:24:39.351 READ: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=255MiB (267MB), run=8468-8468msec 00:24:39.351 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=256MiB (268MB), run=4811-4811msec 00:24:39.351 ----------------------------------------------------- 00:24:39.351 Suppressions used: 00:24:39.351 count bytes template 00:24:39.351 1 5 /usr/src/fio/parse.c 00:24:39.351 2 192 /usr/src/fio/iolog.c 00:24:39.351 1 8 libtcmalloc_minimal.so 00:24:39.351 1 904 libcrypto.so 00:24:39.351 ----------------------------------------------------- 00:24:39.351 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:39.351 Remove shared memory files 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:39.351 10:29:45 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:39.351 10:29:46 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57762 /dev/shm/spdk_tgt_trace.pid75715 00:24:39.351 10:29:46 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:39.351 10:29:46 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:39.351 00:24:39.351 real 1m9.929s 00:24:39.351 user 2m33.067s 00:24:39.351 sys 0m3.977s 00:24:39.351 10:29:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:39.351 ************************************ 00:24:39.351 END TEST ftl_fio_basic 00:24:39.351 ************************************ 00:24:39.351 10:29:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 10:29:46 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:39.351 10:29:46 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:39.351 10:29:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:39.351 10:29:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:39.351 ************************************ 00:24:39.351 START TEST ftl_bdevperf 00:24:39.351 ************************************ 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:39.351 * Looking for test storage... 
00:24:39.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.351 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.352 --rc genhtml_branch_coverage=1 00:24:39.352 --rc genhtml_function_coverage=1 00:24:39.352 --rc genhtml_legend=1 00:24:39.352 --rc geninfo_all_blocks=1 00:24:39.352 --rc geninfo_unexecuted_blocks=1 00:24:39.352 00:24:39.352 ' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.352 --rc genhtml_branch_coverage=1 00:24:39.352 
--rc genhtml_function_coverage=1 00:24:39.352 --rc genhtml_legend=1 00:24:39.352 --rc geninfo_all_blocks=1 00:24:39.352 --rc geninfo_unexecuted_blocks=1 00:24:39.352 00:24:39.352 ' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.352 --rc genhtml_branch_coverage=1 00:24:39.352 --rc genhtml_function_coverage=1 00:24:39.352 --rc genhtml_legend=1 00:24:39.352 --rc geninfo_all_blocks=1 00:24:39.352 --rc geninfo_unexecuted_blocks=1 00:24:39.352 00:24:39.352 ' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:39.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.352 --rc genhtml_branch_coverage=1 00:24:39.352 --rc genhtml_function_coverage=1 00:24:39.352 --rc genhtml_legend=1 00:24:39.352 --rc geninfo_all_blocks=1 00:24:39.352 --rc geninfo_unexecuted_blocks=1 00:24:39.352 00:24:39.352 ' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77721 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77721 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77721 ']' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.352 10:29:46 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:39.352 [2024-11-25 10:29:46.434446] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:24:39.352 [2024-11-25 10:29:46.434567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77721 ] 00:24:39.611 [2024-11-25 10:29:46.612988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.870 [2024-11-25 10:29:46.732742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:40.459 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:40.717 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:40.974 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:40.974 { 00:24:40.974 "name": "nvme0n1", 00:24:40.974 "aliases": [ 00:24:40.974 "aa79cfa2-1141-45b2-9756-92b4af717956" 00:24:40.974 ], 00:24:40.974 "product_name": "NVMe disk", 00:24:40.974 "block_size": 4096, 00:24:40.974 "num_blocks": 1310720, 00:24:40.974 "uuid": "aa79cfa2-1141-45b2-9756-92b4af717956", 00:24:40.974 "numa_id": -1, 00:24:40.974 "assigned_rate_limits": { 00:24:40.974 "rw_ios_per_sec": 0, 00:24:40.974 "rw_mbytes_per_sec": 0, 00:24:40.974 "r_mbytes_per_sec": 0, 00:24:40.974 "w_mbytes_per_sec": 0 00:24:40.974 }, 00:24:40.974 "claimed": true, 00:24:40.974 "claim_type": "read_many_write_one", 00:24:40.974 "zoned": false, 00:24:40.974 "supported_io_types": { 00:24:40.974 "read": true, 00:24:40.974 "write": true, 00:24:40.974 "unmap": true, 00:24:40.974 "flush": true, 00:24:40.974 "reset": true, 00:24:40.974 "nvme_admin": true, 00:24:40.974 "nvme_io": true, 00:24:40.974 "nvme_io_md": false, 00:24:40.974 "write_zeroes": true, 00:24:40.974 "zcopy": false, 00:24:40.974 "get_zone_info": false, 00:24:40.974 "zone_management": false, 00:24:40.974 "zone_append": false, 00:24:40.974 "compare": true, 00:24:40.974 "compare_and_write": false, 00:24:40.974 "abort": true, 00:24:40.974 "seek_hole": false, 00:24:40.974 "seek_data": false, 00:24:40.974 "copy": true, 00:24:40.974 "nvme_iov_md": false 00:24:40.974 }, 00:24:40.974 "driver_specific": { 00:24:40.974 
"nvme": [ 00:24:40.974 { 00:24:40.974 "pci_address": "0000:00:11.0", 00:24:40.974 "trid": { 00:24:40.974 "trtype": "PCIe", 00:24:40.974 "traddr": "0000:00:11.0" 00:24:40.974 }, 00:24:40.974 "ctrlr_data": { 00:24:40.974 "cntlid": 0, 00:24:40.974 "vendor_id": "0x1b36", 00:24:40.974 "model_number": "QEMU NVMe Ctrl", 00:24:40.974 "serial_number": "12341", 00:24:40.974 "firmware_revision": "8.0.0", 00:24:40.974 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:40.974 "oacs": { 00:24:40.974 "security": 0, 00:24:40.974 "format": 1, 00:24:40.974 "firmware": 0, 00:24:40.974 "ns_manage": 1 00:24:40.974 }, 00:24:40.974 "multi_ctrlr": false, 00:24:40.974 "ana_reporting": false 00:24:40.974 }, 00:24:40.974 "vs": { 00:24:40.974 "nvme_version": "1.4" 00:24:40.974 }, 00:24:40.974 "ns_data": { 00:24:40.974 "id": 1, 00:24:40.975 "can_share": false 00:24:40.975 } 00:24:40.975 } 00:24:40.975 ], 00:24:40.975 "mp_policy": "active_passive" 00:24:40.975 } 00:24:40.975 } 00:24:40.975 ]' 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:40.975 10:29:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:41.233 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=999d0273-e7c7-4ec1-accd-76239cf6d4f9 00:24:41.233 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:41.233 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 999d0273-e7c7-4ec1-accd-76239cf6d4f9 00:24:41.492 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:41.492 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=a4668ade-504e-4ad8-964c-f5e3c2b99b61 00:24:41.492 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a4668ade-504e-4ad8-964c-f5e3c2b99b61 00:24:41.750 10:29:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:41.750 10:29:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:41.750 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:41.750 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:41.750 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:41.751 10:29:48 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:41.751 10:29:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.009 { 00:24:42.009 "name": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:42.009 "aliases": [ 00:24:42.009 "lvs/nvme0n1p0" 00:24:42.009 ], 00:24:42.009 "product_name": "Logical Volume", 00:24:42.009 "block_size": 4096, 00:24:42.009 "num_blocks": 26476544, 00:24:42.009 "uuid": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:42.009 "assigned_rate_limits": { 00:24:42.009 "rw_ios_per_sec": 0, 00:24:42.009 "rw_mbytes_per_sec": 0, 00:24:42.009 "r_mbytes_per_sec": 0, 00:24:42.009 "w_mbytes_per_sec": 0 00:24:42.009 }, 00:24:42.009 "claimed": false, 00:24:42.009 "zoned": false, 00:24:42.009 "supported_io_types": { 00:24:42.009 "read": true, 00:24:42.009 "write": true, 00:24:42.009 "unmap": true, 00:24:42.009 "flush": false, 00:24:42.009 "reset": true, 00:24:42.009 "nvme_admin": false, 00:24:42.009 "nvme_io": false, 00:24:42.009 "nvme_io_md": false, 00:24:42.009 "write_zeroes": true, 00:24:42.009 "zcopy": false, 00:24:42.009 "get_zone_info": false, 00:24:42.009 "zone_management": false, 00:24:42.009 "zone_append": false, 00:24:42.009 "compare": false, 00:24:42.009 "compare_and_write": false, 00:24:42.009 "abort": false, 00:24:42.009 "seek_hole": true, 00:24:42.009 "seek_data": true, 00:24:42.009 "copy": false, 00:24:42.009 "nvme_iov_md": false 00:24:42.009 }, 00:24:42.009 "driver_specific": { 00:24:42.009 "lvol": { 00:24:42.009 "lvol_store_uuid": "a4668ade-504e-4ad8-964c-f5e3c2b99b61", 00:24:42.009 "base_bdev": "nvme0n1", 00:24:42.009 "thin_provision": true, 00:24:42.009 "num_allocated_clusters": 0, 00:24:42.009 "snapshot": false, 00:24:42.009 "clone": false, 00:24:42.009 "esnap_clone": false 00:24:42.009 } 00:24:42.009 } 00:24:42.009 } 00:24:42.009 ]' 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.009 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.010 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:42.271 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:42.271 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.531 { 00:24:42.531 "name": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:42.531 "aliases": [ 00:24:42.531 "lvs/nvme0n1p0" 00:24:42.531 ], 00:24:42.531 "product_name": "Logical Volume", 00:24:42.531 "block_size": 4096, 00:24:42.531 "num_blocks": 26476544, 00:24:42.531 "uuid": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:42.531 "assigned_rate_limits": { 00:24:42.531 "rw_ios_per_sec": 0, 00:24:42.531 "rw_mbytes_per_sec": 0, 00:24:42.531 "r_mbytes_per_sec": 0, 00:24:42.531 "w_mbytes_per_sec": 0 00:24:42.531 }, 00:24:42.531 "claimed": false, 00:24:42.531 "zoned": false, 00:24:42.531 "supported_io_types": { 00:24:42.531 "read": true, 00:24:42.531 "write": true, 00:24:42.531 "unmap": true, 00:24:42.531 "flush": false, 00:24:42.531 "reset": true, 00:24:42.531 "nvme_admin": false, 00:24:42.531 "nvme_io": false, 00:24:42.531 "nvme_io_md": false, 00:24:42.531 "write_zeroes": true, 00:24:42.531 "zcopy": false, 00:24:42.531 "get_zone_info": false, 00:24:42.531 "zone_management": false, 00:24:42.531 "zone_append": false, 00:24:42.531 "compare": false, 00:24:42.531 "compare_and_write": false, 00:24:42.531 "abort": false, 00:24:42.531 "seek_hole": true, 00:24:42.531 "seek_data": true, 00:24:42.531 "copy": false, 00:24:42.531 "nvme_iov_md": false 00:24:42.531 }, 00:24:42.531 "driver_specific": { 00:24:42.531 "lvol": { 00:24:42.531 "lvol_store_uuid": "a4668ade-504e-4ad8-964c-f5e3c2b99b61", 00:24:42.531 "base_bdev": "nvme0n1", 00:24:42.531 "thin_provision": true, 00:24:42.531 "num_allocated_clusters": 0, 00:24:42.531 "snapshot": false, 00:24:42.531 "clone": false, 00:24:42.531 "esnap_clone": false 00:24:42.531 } 00:24:42.531 } 00:24:42.531 } 00:24:42.531 ]' 00:24:42.531 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:42.789 10:29:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:43.048 10:29:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f414d36d-97f8-4b43-98c9-8c6581a5479d 00:24:43.048 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.048 { 00:24:43.048 "name": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:43.048 "aliases": [ 00:24:43.048 "lvs/nvme0n1p0" 00:24:43.048 ], 00:24:43.048 "product_name": "Logical Volume", 00:24:43.048 "block_size": 4096, 00:24:43.048 "num_blocks": 26476544, 00:24:43.048 "uuid": "f414d36d-97f8-4b43-98c9-8c6581a5479d", 00:24:43.048 "assigned_rate_limits": { 00:24:43.048 "rw_ios_per_sec": 0, 00:24:43.048 "rw_mbytes_per_sec": 0, 00:24:43.048 "r_mbytes_per_sec": 0, 00:24:43.048 "w_mbytes_per_sec": 0 00:24:43.048 }, 00:24:43.048 "claimed": false, 00:24:43.048 "zoned": false, 00:24:43.048 "supported_io_types": { 00:24:43.048 "read": true, 00:24:43.048 "write": true, 00:24:43.048 "unmap": true, 00:24:43.048 "flush": false, 00:24:43.048 "reset": true, 00:24:43.048 "nvme_admin": false, 00:24:43.048 "nvme_io": false, 00:24:43.048 "nvme_io_md": false, 00:24:43.048 "write_zeroes": true, 00:24:43.048 "zcopy": false, 00:24:43.048 "get_zone_info": false, 00:24:43.048 "zone_management": false, 00:24:43.048 "zone_append": false, 00:24:43.048 "compare": false, 00:24:43.048 "compare_and_write": false, 00:24:43.048 "abort": false, 00:24:43.048 "seek_hole": true, 00:24:43.048 "seek_data": true, 00:24:43.048 "copy": false, 00:24:43.048 "nvme_iov_md": false 00:24:43.048 }, 00:24:43.048 "driver_specific": { 00:24:43.048 "lvol": { 00:24:43.048 "lvol_store_uuid": "a4668ade-504e-4ad8-964c-f5e3c2b99b61", 00:24:43.048 "base_bdev": "nvme0n1", 00:24:43.048 "thin_provision": true, 00:24:43.048 "num_allocated_clusters": 0, 00:24:43.048 "snapshot": false, 00:24:43.048 "clone": false, 00:24:43.048 "esnap_clone": false 00:24:43.048 } 00:24:43.048 } 00:24:43.048 } 00:24:43.048 ]' 00:24:43.048 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.307 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.307 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.307 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.307 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.307 10:29:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.308 10:29:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:43.308 10:29:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f414d36d-97f8-4b43-98c9-8c6581a5479d -c nvc0n1p0 --l2p_dram_limit 20 00:24:43.308 [2024-11-25 10:29:50.394565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.394617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.308 [2024-11-25 10:29:50.394633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:43.308 [2024-11-25 10:29:50.394648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.394708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.394727] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.308 [2024-11-25 10:29:50.394738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:43.308 [2024-11-25 10:29:50.394751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.394772] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.308 [2024-11-25 10:29:50.395914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.308 [2024-11-25 10:29:50.395937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.395951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.308 [2024-11-25 10:29:50.395963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.173 ms 00:24:43.308 [2024-11-25 10:29:50.395976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.396056] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49a624d9-2664-4db5-8ab8-ca0603b98135 00:24:43.308 [2024-11-25 10:29:50.397464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.397488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:43.308 [2024-11-25 10:29:50.397520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:43.308 [2024-11-25 10:29:50.397530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.405297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.405326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.308 [2024-11-25 10:29:50.405342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.729 ms 00:24:43.308 [2024-11-25 10:29:50.405352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.405466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.405495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.308 [2024-11-25 10:29:50.405513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:43.308 [2024-11-25 10:29:50.405533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.405593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.405606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.308 [2024-11-25 10:29:50.405619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:43.308 [2024-11-25 10:29:50.405630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.405656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.308 [2024-11-25 10:29:50.410919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.410949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.308 [2024-11-25 10:29:50.410961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.280 ms 00:24:43.308 [2024-11-25 10:29:50.410993] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.411030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.411047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.308 [2024-11-25 10:29:50.411058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:43.308 [2024-11-25 10:29:50.411072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.411114] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:43.308 [2024-11-25 10:29:50.411243] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.308 [2024-11-25 10:29:50.411257] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.308 [2024-11-25 10:29:50.411273] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:43.308 [2024-11-25 10:29:50.411285] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411300] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411311] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.308 [2024-11-25 10:29:50.411323] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.308 [2024-11-25 10:29:50.411333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.308 [2024-11-25 10:29:50.411345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.308 [2024-11-25 10:29:50.411356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.411378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.308 [2024-11-25 10:29:50.411389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:24:43.308 [2024-11-25 10:29:50.411404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.411474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.308 [2024-11-25 10:29:50.411506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.308 [2024-11-25 10:29:50.411518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:43.308 [2024-11-25 10:29:50.411532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.308 [2024-11-25 10:29:50.411609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.308 [2024-11-25 10:29:50.411624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.308 [2024-11-25 10:29:50.411637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.308 [2024-11-25 10:29:50.411672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.308 
[2024-11-25 10:29:50.411694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.308 [2024-11-25 10:29:50.411703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.308 [2024-11-25 10:29:50.411723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.308 [2024-11-25 10:29:50.411747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.308 [2024-11-25 10:29:50.411757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.308 [2024-11-25 10:29:50.411769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.308 [2024-11-25 10:29:50.411779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.308 [2024-11-25 10:29:50.411794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.308 [2024-11-25 10:29:50.411815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.308 [2024-11-25 10:29:50.411847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.308 [2024-11-25 10:29:50.411879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.308 [2024-11-25 10:29:50.411908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.308 [2024-11-25 10:29:50.411940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.308 [2024-11-25 10:29:50.411962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.308 [2024-11-25 10:29:50.411971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.308 [2024-11-25 10:29:50.411983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.308 [2024-11-25 10:29:50.411992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.308 [2024-11-25 10:29:50.412003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.308 [2024-11-25 10:29:50.412012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.308 [2024-11-25 10:29:50.412023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.308 [2024-11-25 10:29:50.412033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:43.308 [2024-11-25 10:29:50.412044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.308 [2024-11-25 10:29:50.412053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.308 [2024-11-25 10:29:50.412064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.308 [2024-11-25 10:29:50.412072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.309 [2024-11-25 10:29:50.412084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.309 [2024-11-25 10:29:50.412093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.309 [2024-11-25 10:29:50.412107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.309 [2024-11-25 10:29:50.412117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.309 [2024-11-25 10:29:50.412133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.309 [2024-11-25 10:29:50.412143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.309 [2024-11-25 10:29:50.412155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.309 [2024-11-25 10:29:50.412164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.309 [2024-11-25 10:29:50.412176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.309 [2024-11-25 10:29:50.412185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.309 [2024-11-25 10:29:50.412200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.309 [2024-11-25 10:29:50.412213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.309 [2024-11-25 10:29:50.412237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.309 [2024-11-25 10:29:50.412250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.309 [2024-11-25 10:29:50.412260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.309 [2024-11-25 10:29:50.412273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.309 [2024-11-25 10:29:50.412283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.309 [2024-11-25 10:29:50.412295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:43.309 [2024-11-25 10:29:50.412306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.309 [2024-11-25 10:29:50.412321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.309 [2024-11-25 10:29:50.412332] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.309 [2024-11-25 10:29:50.412389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.309 [2024-11-25 10:29:50.412400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.309 [2024-11-25 10:29:50.412428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.309 [2024-11-25 10:29:50.412440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.309 [2024-11-25 10:29:50.412451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.309 [2024-11-25 10:29:50.412464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.309 [2024-11-25 10:29:50.412475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.309 [2024-11-25 10:29:50.412506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:24:43.309 [2024-11-25 10:29:50.412517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.309 [2024-11-25 10:29:50.412557] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:24:43.309 [2024-11-25 10:29:50.412570] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:47.500 [2024-11-25 10:29:53.867536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.500 [2024-11-25 10:29:53.867601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:47.501 [2024-11-25 10:29:53.867620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3460.586 ms 00:24:47.501 [2024-11-25 10:29:53.867631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.905826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.905873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.501 [2024-11-25 10:29:53.905892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.970 ms 00:24:47.501 [2024-11-25 10:29:53.905903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.906073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.906089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:47.501 [2024-11-25 10:29:53.906105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:47.501 [2024-11-25 10:29:53.906116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.964268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.964315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.501 [2024-11-25 10:29:53.964331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.199 ms 00:24:47.501 [2024-11-25 10:29:53.964342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.964390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.964401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.501 [2024-11-25 10:29:53.964414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.501 [2024-11-25 10:29:53.964427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.964906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.964925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.501 [2024-11-25 10:29:53.964939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:24:47.501 [2024-11-25 10:29:53.964949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.965061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.965075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.501 [2024-11-25 10:29:53.965090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:47.501 [2024-11-25 10:29:53.965100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.982950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:53.982998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.501 [2024-11-25 
10:29:53.983016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.854 ms 00:24:47.501 [2024-11-25 10:29:53.983039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:53.996152] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:47.501 [2024-11-25 10:29:54.002011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.002054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:47.501 [2024-11-25 10:29:54.002069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.874 ms 00:24:47.501 [2024-11-25 10:29:54.002082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.086792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.086859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:47.501 [2024-11-25 10:29:54.086875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.805 ms 00:24:47.501 [2024-11-25 10:29:54.086889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.087069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.087088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:47.501 [2024-11-25 10:29:54.087099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:24:47.501 [2024-11-25 10:29:54.087115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.123578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.123626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:47.501 [2024-11-25 10:29:54.123641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.448 ms 00:24:47.501 [2024-11-25 10:29:54.123654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.159245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.159289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:47.501 [2024-11-25 10:29:54.159305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.608 ms 00:24:47.501 [2024-11-25 10:29:54.159317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.159992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.160015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.501 [2024-11-25 10:29:54.160028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:24:47.501 [2024-11-25 10:29:54.160041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.258918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.258973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:47.501 [2024-11-25 10:29:54.258989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.983 ms 00:24:47.501 [2024-11-25 10:29:54.259003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 
10:29:54.296236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.296282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:47.501 [2024-11-25 10:29:54.296300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.213 ms 00:24:47.501 [2024-11-25 10:29:54.296313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.333522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.333565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:47.501 [2024-11-25 10:29:54.333579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.227 ms 00:24:47.501 [2024-11-25 10:29:54.333591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.369592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.369635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:47.501 [2024-11-25 10:29:54.369649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.019 ms 00:24:47.501 [2024-11-25 10:29:54.369662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.369703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.369721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:47.501 [2024-11-25 10:29:54.369731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:47.501 [2024-11-25 10:29:54.369744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.369840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.501 [2024-11-25 10:29:54.369855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:47.501 [2024-11-25 10:29:54.369865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:47.501 [2024-11-25 10:29:54.369877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.501 [2024-11-25 10:29:54.370907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3982.360 ms, result 0 00:24:47.501 { 00:24:47.501 "name": "ftl0", 00:24:47.501 "uuid": "49a624d9-2664-4db5-8ab8-ca0603b98135" 00:24:47.501 } 00:24:47.501 10:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:47.501 10:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:47.501 10:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:47.502 10:29:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:47.760 [2024-11-25 10:29:54.707039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:47.760 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:47.760 Zero copy mechanism will not be used. 00:24:47.760 Running I/O for 4 seconds... 
00:24:49.634 1800.00 IOPS, 119.53 MiB/s
[2024-11-25T10:29:58.121Z] 1811.00 IOPS, 120.26 MiB/s
[2024-11-25T10:29:59.057Z] 1799.67 IOPS, 119.51 MiB/s
[2024-11-25T10:29:59.057Z] 1782.50 IOPS, 118.37 MiB/s
00:24:51.945 Latency(us)
00:24:51.945 [2024-11-25T10:29:59.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:51.945 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:24:51.945 ftl0 : 4.00 1781.88 118.33 0.00 0.00 591.88 199.87 9633.00
00:24:51.945 [2024-11-25T10:29:59.057Z] ===================================================================================================================
00:24:51.945 [2024-11-25T10:29:59.057Z] Total : 1781.88 118.33 0.00 0.00 591.88 199.87 9633.00
00:24:51.945 [2024-11-25 10:29:58.712239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:24:51.945 {
00:24:51.945   "results": [
00:24:51.945     {
00:24:51.945       "job": "ftl0",
00:24:51.945       "core_mask": "0x1",
00:24:51.945       "workload": "randwrite",
00:24:51.945       "status": "finished",
00:24:51.945       "queue_depth": 1,
00:24:51.945       "io_size": 69632,
00:24:51.945       "runtime": 4.001942,
00:24:51.945       "iops": 1781.8848948835341,
00:24:51.945       "mibps": 118.32829380085968,
00:24:51.945       "io_failed": 0,
00:24:51.945       "io_timeout": 0,
00:24:51.945       "avg_latency_us": 591.8846401170522,
00:24:51.945       "min_latency_us": 199.86506024096386,
00:24:51.945       "max_latency_us": 9633.002409638555
00:24:51.945     }
00:24:51.945   ],
00:24:51.945   "core_count": 1
00:24:51.945 }
00:24:51.945 10:29:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-25 10:29:58.817622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:24:53.818 11301.00 IOPS, 44.14 MiB/s
[2024-11-25T10:30:01.915Z] 11144.00 IOPS, 43.53 MiB/s
[2024-11-25T10:30:02.851Z] 10469.33 IOPS, 40.90 MiB/s
[2024-11-25T10:30:02.851Z] 10051.25 IOPS, 39.26 MiB/s
00:24:55.739 Latency(us)
00:24:55.739 [2024-11-25T10:30:02.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.739 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:24:55.739 ftl0 : 4.02 10022.92 39.15 0.00 0.00 12750.65 241.81 72431.76
00:24:55.739 [2024-11-25T10:30:02.851Z] ===================================================================================================================
00:24:55.739 [2024-11-25T10:30:02.851Z] Total : 10022.92 39.15 0.00 0.00 12750.65 0.00 72431.76
00:24:55.739 [2024-11-25 10:30:02.845773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:24:55.998 {
00:24:55.998   "results": [
00:24:55.998     {
00:24:55.998       "job": "ftl0",
00:24:55.998       "core_mask": "0x1",
00:24:55.998       "workload": "randwrite",
00:24:55.998       "status": "finished",
00:24:55.998       "queue_depth": 128,
00:24:55.998       "io_size": 4096,
00:24:55.998       "runtime": 4.022481,
00:24:55.998       "iops": 10022.91869122564,
00:24:55.998       "mibps": 39.15202613760015,
00:24:55.998       "io_failed": 0,
00:24:55.998       "io_timeout": 0,
00:24:55.998       "avg_latency_us": 12750.653162681732,
00:24:55.998       "min_latency_us": 241.81204819277107,
00:24:55.998       "max_latency_us": 72431.75582329318
00:24:55.998     }
00:24:55.998   ],
00:24:55.998   "core_count": 1
00:24:55.998 }
00:24:55.998 10:30:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:24:55.998 [2024-11-25 10:30:03.011905] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:24:58.312 8260.00 IOPS, 32.27 MiB/s
[2024-11-25T10:30:06.363Z] 8226.00 IOPS, 32.13 MiB/s
[2024-11-25T10:30:07.298Z] 8226.33 IOPS, 32.13 MiB/s
[2024-11-25T10:30:07.298Z] 8190.00 IOPS, 31.99 MiB/s
00:25:00.186 Latency(us)
00:25:00.186 [2024-11-25T10:30:07.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.186 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:00.186 Verification LBA range: start 0x0 length 0x1400000
00:25:00.186 ftl0 : 4.01 8201.48 32.04 0.00 0.00 15560.43 269.78 32004.73
00:25:00.186 [2024-11-25T10:30:07.298Z] ===================================================================================================================
00:25:00.186 [2024-11-25T10:30:07.298Z] Total : 8201.48 32.04 0.00 0.00 15560.43 0.00 32004.73
00:25:00.186 [2024-11-25 10:30:07.035001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:25:00.186 {
00:25:00.186   "results": [
00:25:00.186     {
00:25:00.186       "job": "ftl0",
00:25:00.186       "core_mask": "0x1",
00:25:00.186       "workload": "verify",
00:25:00.186       "status": "finished",
00:25:00.186       "verify_range": {
00:25:00.186         "start": 0,
00:25:00.186         "length": 20971520
00:25:00.186       },
00:25:00.186       "queue_depth": 128,
00:25:00.186       "io_size": 4096,
00:25:00.186       "runtime": 4.009884,
00:25:00.186       "iops": 8201.484132708078,
00:25:00.186       "mibps": 32.03704739339093,
00:25:00.186       "io_failed": 0,
00:25:00.186       "io_timeout": 0,
00:25:00.186       "avg_latency_us": 15560.432977716197,
00:25:00.186       "min_latency_us": 269.7767068273092,
00:25:00.186       "max_latency_us": 32004.729317269077
00:25:00.186     }
00:25:00.186   ],
00:25:00.186   "core_count": 1
00:25:00.186 }
00:25:00.186 10:30:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:25:00.186 [2024-11-25 10:30:07.262744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.186 [2024-11-25 10:30:07.262939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:00.186 [2024-11-25 10:30:07.263031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:25:00.186 [2024-11-25 10:30:07.263073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.186 [2024-11-25 10:30:07.263176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:00.186 [2024-11-25 10:30:07.267422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.186 [2024-11-25 10:30:07.267576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:00.186 [2024-11-25 10:30:07.267668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms
00:25:00.186 [2024-11-25 10:30:07.267705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.186 [2024-11-25 10:30:07.269646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.186 [2024-11-25 10:30:07.269789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:00.186 [2024-11-25 10:30:07.269887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms
00:25:00.186 [2024-11-25 10:30:07.269930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.445 [2024-11-25 10:30:07.476127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:00.445 [2024-11-25 10:30:07.476335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:25:00.445 [2024-11-25 10:30:07.476459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.430 ms 00:25:00.445 [2024-11-25 10:30:07.476500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.445 [2024-11-25 10:30:07.481695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.445 [2024-11-25 10:30:07.481828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.445 [2024-11-25 10:30:07.481940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.104 ms 00:25:00.445 [2024-11-25 10:30:07.481957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.445 [2024-11-25 10:30:07.518069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.445 [2024-11-25 10:30:07.518113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.445 [2024-11-25 10:30:07.518130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.108 ms 00:25:00.445 [2024-11-25 10:30:07.518140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.445 [2024-11-25 10:30:07.540143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.445 [2024-11-25 10:30:07.540191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.445 [2024-11-25 10:30:07.540208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.995 ms 00:25:00.445 [2024-11-25 10:30:07.540219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.445 [2024-11-25 10:30:07.540373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.445 [2024-11-25 10:30:07.540387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.445 [2024-11-25 10:30:07.540404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:00.445 [2024-11-25 10:30:07.540414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.705 [2024-11-25 10:30:07.575990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.705 [2024-11-25 10:30:07.576034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:00.705 [2024-11-25 10:30:07.576050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.611 ms 00:25:00.705 [2024-11-25 10:30:07.576061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.705 [2024-11-25 10:30:07.611377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.705 [2024-11-25 10:30:07.611419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:00.705 [2024-11-25 10:30:07.611435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.328 ms 00:25:00.705 [2024-11-25 10:30:07.611445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.705 [2024-11-25 10:30:07.646434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.705 [2024-11-25 10:30:07.646610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:00.705 [2024-11-25 10:30:07.646637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.000 ms 00:25:00.705 [2024-11-25 10:30:07.646647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.705 [2024-11-25 10:30:07.681959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.705 [2024-11-25 10:30:07.682144] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:00.705 [2024-11-25 10:30:07.682175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.250 ms 00:25:00.705 [2024-11-25 10:30:07.682186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.705 [2024-11-25 10:30:07.682231] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:00.705 [2024-11-25 10:30:07.682248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:00.705 [2024-11-25 10:30:07.682423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:25:00.706 [2024-11-25 10:30:07.682537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.682995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:00.706 [2024-11-25 10:30:07.683443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:00.707 [2024-11-25 10:30:07.683453] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:00.707 [2024-11-25 10:30:07.683468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:00.707 [2024-11-25 10:30:07.683479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:00.707 [2024-11-25 10:30:07.683500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:00.707 [2024-11-25 10:30:07.683518] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:00.707 [2024-11-25 10:30:07.683532] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49a624d9-2664-4db5-8ab8-ca0603b98135 00:25:00.707 [2024-11-25 10:30:07.683543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:00.707 [2024-11-25 10:30:07.683559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:00.707 [2024-11-25 10:30:07.683569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:00.707 [2024-11-25 10:30:07.683582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:00.707 [2024-11-25 10:30:07.683591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:00.707 [2024-11-25 10:30:07.683605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:00.707 [2024-11-25 10:30:07.683615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:00.707 [2024-11-25 10:30:07.683629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:00.707 [2024-11-25 10:30:07.683638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:00.707 [2024-11-25 10:30:07.683657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.707 [2024-11-25 10:30:07.683667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:00.707 [2024-11-25 10:30:07.683681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:25:00.707 [2024-11-25 10:30:07.683691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.704014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.707 [2024-11-25 10:30:07.704079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:00.707 [2024-11-25 10:30:07.704097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.283 ms 00:25:00.707 [2024-11-25 10:30:07.704108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.704719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.707 [2024-11-25 10:30:07.704732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:00.707 [2024-11-25 10:30:07.704746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:25:00.707 [2024-11-25 10:30:07.704756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.760351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.707 [2024-11-25 10:30:07.760702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.707 [2024-11-25 10:30:07.760748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.707 [2024-11-25 10:30:07.760765] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.760855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.707 [2024-11-25 10:30:07.760867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.707 [2024-11-25 10:30:07.760880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.707 [2024-11-25 10:30:07.760890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.761077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.707 [2024-11-25 10:30:07.761096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.707 [2024-11-25 10:30:07.761110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.707 [2024-11-25 10:30:07.761119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.707 [2024-11-25 10:30:07.761141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.707 [2024-11-25 10:30:07.761151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.707 [2024-11-25 10:30:07.761164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.707 [2024-11-25 10:30:07.761174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.886082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.886158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.966 [2024-11-25 10:30:07.886180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.886191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.989699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.989764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.966 [2024-11-25 10:30:07.989781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.989792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.989908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.989925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.966 [2024-11-25 10:30:07.989938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.989947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.990016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.966 [2024-11-25 10:30:07.990029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.990039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.990176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.966 [2024-11-25 10:30:07.990196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:25:00.966 [2024-11-25 10:30:07.990207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.990258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:00.966 [2024-11-25 10:30:07.990271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.990282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.990333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.966 [2024-11-25 10:30:07.990350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.990370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.966 [2024-11-25 10:30:07.990425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.966 [2024-11-25 10:30:07.990438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.966 [2024-11-25 10:30:07.990448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.966 [2024-11-25 10:30:07.990602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 728.995 ms, result 0 00:25:00.966 true 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77721 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77721 ']' 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77721 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.966 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77721 00:25:00.966 killing process with pid 77721 00:25:00.966 Received shutdown signal, test time was about 4.000000 seconds 00:25:00.966 00:25:00.967 Latency(us) 00:25:00.967 [2024-11-25T10:30:08.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.967 [2024-11-25T10:30:08.079Z] =================================================================================================================== 00:25:00.967 [2024-11-25T10:30:08.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.967 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.967 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.967 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77721' 00:25:00.967 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77721 00:25:00.967 10:30:08 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77721 00:25:02.344 Remove shared memory files 00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:02.344 10:30:09 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:25:02.344 10:30:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:25:02.344 ************************************
00:25:02.344 END TEST ftl_bdevperf
00:25:02.344 ************************************
00:25:02.344
00:25:02.344 real 0m23.123s
00:25:02.344 user 0m25.919s
00:25:02.344 sys 0m1.189s
10:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:02.344 10:30:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:02.344 10:30:09 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
10:30:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
10:30:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
10:30:09 ftl -- common/autotest_common.sh@10 -- # set +x
00:25:02.344 ************************************
00:25:02.344 START TEST ftl_trim
00:25:02.344 ************************************
10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:25:02.344 * Looking for test storage...
00:25:02.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:25:02.344 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:02.604 10:30:09 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.604 --rc genhtml_branch_coverage=1 00:25:02.604 --rc genhtml_function_coverage=1 00:25:02.604 --rc genhtml_legend=1 00:25:02.604 --rc geninfo_all_blocks=1 00:25:02.604 --rc geninfo_unexecuted_blocks=1 00:25:02.604 00:25:02.604 ' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.604 --rc genhtml_branch_coverage=1 00:25:02.604 --rc genhtml_function_coverage=1 00:25:02.604 --rc genhtml_legend=1 00:25:02.604 --rc geninfo_all_blocks=1 00:25:02.604 --rc geninfo_unexecuted_blocks=1 00:25:02.604 00:25:02.604 ' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.604 --rc genhtml_branch_coverage=1 00:25:02.604 --rc genhtml_function_coverage=1 00:25:02.604 --rc genhtml_legend=1 00:25:02.604 --rc geninfo_all_blocks=1 00:25:02.604 --rc geninfo_unexecuted_blocks=1 00:25:02.604 00:25:02.604 ' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:02.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:02.604 --rc genhtml_branch_coverage=1 00:25:02.604 --rc genhtml_function_coverage=1 00:25:02.604 --rc genhtml_legend=1 00:25:02.604 --rc geninfo_all_blocks=1 00:25:02.604 --rc geninfo_unexecuted_blocks=1 00:25:02.604 00:25:02.604 ' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:02.604 10:30:09 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78081 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78081 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78081 ']' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.604 10:30:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:02.604 10:30:09 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:02.604 [2024-11-25 10:30:09.652240] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:25:02.604 [2024-11-25 10:30:09.652362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78081 ] 00:25:02.864 [2024-11-25 10:30:09.833560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.864 [2024-11-25 10:30:09.951570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.864 [2024-11-25 10:30:09.951694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.864 [2024-11-25 10:30:09.951729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.801 10:30:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.801 10:30:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:03.801 10:30:10 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:04.060 10:30:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:04.060 10:30:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:04.060 10:30:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:04.060 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:04.060 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:04.060 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:04.060 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:04.060 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:04.319 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:04.319 { 00:25:04.319 "name": "nvme0n1", 00:25:04.319 "aliases": [ 
00:25:04.319 "970026d8-00b3-47f1-be88-3dc090cf4c08" 00:25:04.319 ], 00:25:04.319 "product_name": "NVMe disk", 00:25:04.319 "block_size": 4096, 00:25:04.319 "num_blocks": 1310720, 00:25:04.319 "uuid": "970026d8-00b3-47f1-be88-3dc090cf4c08", 00:25:04.319 "numa_id": -1, 00:25:04.319 "assigned_rate_limits": { 00:25:04.319 "rw_ios_per_sec": 0, 00:25:04.319 "rw_mbytes_per_sec": 0, 00:25:04.319 "r_mbytes_per_sec": 0, 00:25:04.319 "w_mbytes_per_sec": 0 00:25:04.319 }, 00:25:04.319 "claimed": true, 00:25:04.319 "claim_type": "read_many_write_one", 00:25:04.319 "zoned": false, 00:25:04.319 "supported_io_types": { 00:25:04.319 "read": true, 00:25:04.319 "write": true, 00:25:04.319 "unmap": true, 00:25:04.319 "flush": true, 00:25:04.319 "reset": true, 00:25:04.319 "nvme_admin": true, 00:25:04.319 "nvme_io": true, 00:25:04.319 "nvme_io_md": false, 00:25:04.319 "write_zeroes": true, 00:25:04.319 "zcopy": false, 00:25:04.319 "get_zone_info": false, 00:25:04.319 "zone_management": false, 00:25:04.319 "zone_append": false, 00:25:04.319 "compare": true, 00:25:04.319 "compare_and_write": false, 00:25:04.319 "abort": true, 00:25:04.319 "seek_hole": false, 00:25:04.319 "seek_data": false, 00:25:04.319 "copy": true, 00:25:04.319 "nvme_iov_md": false 00:25:04.319 }, 00:25:04.319 "driver_specific": { 00:25:04.319 "nvme": [ 00:25:04.319 { 00:25:04.319 "pci_address": "0000:00:11.0", 00:25:04.319 "trid": { 00:25:04.319 "trtype": "PCIe", 00:25:04.319 "traddr": "0000:00:11.0" 00:25:04.319 }, 00:25:04.319 "ctrlr_data": { 00:25:04.319 "cntlid": 0, 00:25:04.319 "vendor_id": "0x1b36", 00:25:04.319 "model_number": "QEMU NVMe Ctrl", 00:25:04.319 "serial_number": "12341", 00:25:04.319 "firmware_revision": "8.0.0", 00:25:04.319 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:04.319 "oacs": { 00:25:04.319 "security": 0, 00:25:04.319 "format": 1, 00:25:04.319 "firmware": 0, 00:25:04.319 "ns_manage": 1 00:25:04.319 }, 00:25:04.319 "multi_ctrlr": false, 00:25:04.319 "ana_reporting": false 00:25:04.319 }, 00:25:04.319 "vs": { 00:25:04.319 "nvme_version": "1.4" 00:25:04.319 }, 00:25:04.319 "ns_data": { 00:25:04.319 "id": 1, 00:25:04.319 "can_share": false 00:25:04.319 } 00:25:04.319 } 00:25:04.319 ], 00:25:04.319 "mp_policy": "active_passive" 00:25:04.319 } 00:25:04.319 } 00:25:04.319 ]' 00:25:04.319 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:04.319 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:04.319 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:04.579 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:04.579 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:04.579 10:30:11 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=a4668ade-504e-4ad8-964c-f5e3c2b99b61 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:04.579 10:30:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a4668ade-504e-4ad8-964c-f5e3c2b99b61 00:25:04.839 10:30:11 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:05.098 10:30:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c940c363-2bb1-4496-ae60-53fd257d88bf 00:25:05.098 10:30:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c940c363-2bb1-4496-ae60-53fd257d88bf 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:05.357 10:30:12 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.357 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.358 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:05.358 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:05.358 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:05.358 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:05.616 { 00:25:05.616 "name": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:05.616 "aliases": [ 00:25:05.616 "lvs/nvme0n1p0" 00:25:05.616 ], 00:25:05.616 "product_name": "Logical Volume", 00:25:05.616 "block_size": 4096, 00:25:05.616 "num_blocks": 26476544, 00:25:05.616 "uuid": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:05.616 "assigned_rate_limits": { 00:25:05.616 "rw_ios_per_sec": 0, 00:25:05.616 "rw_mbytes_per_sec": 0, 00:25:05.616 "r_mbytes_per_sec": 0, 00:25:05.616 "w_mbytes_per_sec": 0 00:25:05.616 }, 00:25:05.616 "claimed": false, 00:25:05.616 "zoned": false, 00:25:05.616 "supported_io_types": { 00:25:05.616 "read": true, 00:25:05.616 "write": true, 00:25:05.616 "unmap": true, 00:25:05.616 "flush": false, 00:25:05.616 "reset": true, 00:25:05.616 "nvme_admin": false, 00:25:05.616 "nvme_io": false, 00:25:05.616 "nvme_io_md": false, 00:25:05.616 "write_zeroes": true, 00:25:05.616 "zcopy": false, 00:25:05.616 "get_zone_info": false, 00:25:05.616 "zone_management": false, 00:25:05.616 "zone_append": false, 00:25:05.616 "compare": false, 00:25:05.616 "compare_and_write": false, 00:25:05.616 "abort": false, 00:25:05.616 "seek_hole": true, 00:25:05.616 "seek_data": true, 00:25:05.616 "copy": false, 00:25:05.616 "nvme_iov_md": false 00:25:05.616 }, 00:25:05.616 "driver_specific": { 00:25:05.616 "lvol": { 00:25:05.616 "lvol_store_uuid": "c940c363-2bb1-4496-ae60-53fd257d88bf", 00:25:05.616 "base_bdev": "nvme0n1", 00:25:05.616 "thin_provision": true, 00:25:05.616 "num_allocated_clusters": 0, 00:25:05.616 "snapshot": false, 00:25:05.616 "clone": false, 00:25:05.616 "esnap_clone": false 00:25:05.616 } 00:25:05.616 } 00:25:05.616 } 00:25:05.616 ]' 00:25:05.616 10:30:12 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:05.616 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:05.616 10:30:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:05.616 10:30:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:05.616 10:30:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:05.876 10:30:12 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:05.876 10:30:12 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:05.876 10:30:12 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.876 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:05.876 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:05.876 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:05.876 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:05.876 10:30:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:06.135 { 00:25:06.135 "name": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:06.135 "aliases": [ 00:25:06.135 "lvs/nvme0n1p0" 00:25:06.135 ], 00:25:06.135 "product_name": "Logical Volume", 00:25:06.135 "block_size": 4096, 00:25:06.135 "num_blocks": 26476544, 00:25:06.135 "uuid": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:06.135 "assigned_rate_limits": { 00:25:06.135 "rw_ios_per_sec": 0, 00:25:06.135 "rw_mbytes_per_sec": 0, 00:25:06.135 "r_mbytes_per_sec": 0, 00:25:06.135 "w_mbytes_per_sec": 0 00:25:06.135 }, 00:25:06.135 "claimed": false, 00:25:06.135 "zoned": false, 00:25:06.135 "supported_io_types": { 00:25:06.135 "read": true, 00:25:06.135 "write": true, 00:25:06.135 "unmap": true, 00:25:06.135 "flush": false, 00:25:06.135 "reset": true, 00:25:06.135 "nvme_admin": false, 00:25:06.135 "nvme_io": false, 00:25:06.135 "nvme_io_md": false, 00:25:06.135 "write_zeroes": true, 00:25:06.135 "zcopy": false, 00:25:06.135 "get_zone_info": false, 00:25:06.135 "zone_management": false, 00:25:06.135 "zone_append": false, 00:25:06.135 "compare": false, 00:25:06.135 "compare_and_write": false, 00:25:06.135 "abort": false, 00:25:06.135 "seek_hole": true, 00:25:06.135 "seek_data": true, 00:25:06.135 "copy": false, 00:25:06.135 "nvme_iov_md": false 00:25:06.135 }, 00:25:06.135 "driver_specific": { 00:25:06.135 "lvol": { 00:25:06.135 "lvol_store_uuid": "c940c363-2bb1-4496-ae60-53fd257d88bf", 00:25:06.135 "base_bdev": "nvme0n1", 00:25:06.135 "thin_provision": true, 00:25:06.135 "num_allocated_clusters": 0, 00:25:06.135 "snapshot": false, 00:25:06.135 "clone": false, 00:25:06.135 "esnap_clone": false 00:25:06.135 } 00:25:06.135 } 00:25:06.135 } 00:25:06.135 ]' 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:06.135 10:30:13 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:06.135 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:06.135 10:30:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:06.135 10:30:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:06.394 10:30:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:06.394 10:30:13 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:06.394 10:30:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:06.394 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:06.394 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:06.394 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:06.394 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:06.394 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad3eeaf3-d085-4a2e-9b41-866189f8c779 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:06.654 { 00:25:06.654 "name": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:06.654 "aliases": [ 00:25:06.654 "lvs/nvme0n1p0" 00:25:06.654 ], 00:25:06.654 "product_name": "Logical Volume", 00:25:06.654 "block_size": 4096, 00:25:06.654 "num_blocks": 26476544, 00:25:06.654 "uuid": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:06.654 "assigned_rate_limits": { 00:25:06.654 "rw_ios_per_sec": 0, 00:25:06.654 "rw_mbytes_per_sec": 0, 00:25:06.654 "r_mbytes_per_sec": 0, 00:25:06.654 "w_mbytes_per_sec": 0 00:25:06.654 }, 00:25:06.654 "claimed": false, 00:25:06.654 "zoned": false, 00:25:06.654 "supported_io_types": { 00:25:06.654 "read": true, 00:25:06.654 "write": true, 00:25:06.654 "unmap": true, 00:25:06.654 "flush": false, 00:25:06.654 "reset": true, 00:25:06.654 "nvme_admin": false, 00:25:06.654 "nvme_io": false, 00:25:06.654 "nvme_io_md": false, 00:25:06.654 "write_zeroes": true, 00:25:06.654 "zcopy": false, 00:25:06.654 "get_zone_info": false, 00:25:06.654 "zone_management": false, 00:25:06.654 "zone_append": false, 00:25:06.654 "compare": false, 00:25:06.654 "compare_and_write": false, 00:25:06.654 "abort": false, 00:25:06.654 "seek_hole": true, 00:25:06.654 "seek_data": true, 00:25:06.654 "copy": false, 00:25:06.654 "nvme_iov_md": false 00:25:06.654 }, 00:25:06.654 "driver_specific": { 00:25:06.654 "lvol": { 00:25:06.654 "lvol_store_uuid": "c940c363-2bb1-4496-ae60-53fd257d88bf", 00:25:06.654 "base_bdev": "nvme0n1", 00:25:06.654 "thin_provision": true, 00:25:06.654 "num_allocated_clusters": 0, 00:25:06.654 "snapshot": false, 00:25:06.654 "clone": false, 00:25:06.654 "esnap_clone": false 00:25:06.654 } 00:25:06.654 } 00:25:06.654 } 00:25:06.654 ]' 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:06.654 10:30:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:06.654 10:30:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:06.655 10:30:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ad3eeaf3-d085-4a2e-9b41-866189f8c779 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:06.915 [2024-11-25 10:30:13.864469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.864527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:06.915 [2024-11-25 10:30:13.864549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:06.915 [2024-11-25 10:30:13.864560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.867893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.867936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.915 [2024-11-25 10:30:13.867952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms 00:25:06.915 [2024-11-25 10:30:13.867962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.868116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:06.915 [2024-11-25 10:30:13.869073] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:06.915 [2024-11-25 10:30:13.869234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.869249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.915 [2024-11-25 10:30:13.869272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:25:06.915 [2024-11-25 10:30:13.869283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.869522] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7dfaaa21-aae5-4940-8ecb-2cbd33e49460 00:25:06.915 [2024-11-25 10:30:13.870905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.870941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:06.915 [2024-11-25 10:30:13.870954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:06.915 [2024-11-25 10:30:13.870966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.878327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.878365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.915 [2024-11-25 10:30:13.878377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.292 ms 00:25:06.915 [2024-11-25 10:30:13.878392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.878579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.878598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.915 [2024-11-25 10:30:13.878609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.097 ms 00:25:06.915 [2024-11-25 10:30:13.878626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.878664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.878678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:06.915 [2024-11-25 10:30:13.878689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:06.915 [2024-11-25 10:30:13.878704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.878743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:06.915 [2024-11-25 10:30:13.883968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.884116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.915 [2024-11-25 10:30:13.884141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:25:06.915 [2024-11-25 10:30:13.884151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.884231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.884259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:06.915 [2024-11-25 10:30:13.884273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:06.915 [2024-11-25 10:30:13.884283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.884316] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:06.915 [2024-11-25 10:30:13.884440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:06.915 [2024-11-25 10:30:13.884460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:06.915 [2024-11-25 10:30:13.884474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:06.915 [2024-11-25 10:30:13.884489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:06.915 [2024-11-25 10:30:13.884512] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:06.915 [2024-11-25 10:30:13.884526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:06.915 [2024-11-25 10:30:13.884536] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:06.915 [2024-11-25 10:30:13.884559] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:06.915 [2024-11-25 10:30:13.884569] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:06.915 [2024-11-25 10:30:13.884582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 [2024-11-25 10:30:13.884592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:06.915 [2024-11-25 10:30:13.884607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:25:06.915 [2024-11-25 10:30:13.884617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.884706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.915 
[2024-11-25 10:30:13.884717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:06.915 [2024-11-25 10:30:13.884729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:06.915 [2024-11-25 10:30:13.884739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.915 [2024-11-25 10:30:13.884854] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:06.915 [2024-11-25 10:30:13.884866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:06.915 [2024-11-25 10:30:13.884879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:06.915 [2024-11-25 10:30:13.884890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.915 [2024-11-25 10:30:13.884903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:06.915 [2024-11-25 10:30:13.884912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:06.915 [2024-11-25 10:30:13.884924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:06.915 [2024-11-25 10:30:13.884933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:06.916 [2024-11-25 10:30:13.884945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:06.916 [2024-11-25 10:30:13.884954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.916 [2024-11-25 10:30:13.884966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:06.916 [2024-11-25 10:30:13.884975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:06.916 [2024-11-25 10:30:13.884988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:06.916 [2024-11-25 10:30:13.884997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:06.916 [2024-11-25 10:30:13.885011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:06.916 [2024-11-25 10:30:13.885020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:06.916 [2024-11-25 10:30:13.885044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:06.916 [2024-11-25 10:30:13.885078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:06.916 [2024-11-25 10:30:13.885108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:06.916 [2024-11-25 10:30:13.885140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:06.916 [2024-11-25 10:30:13.885169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:06.916 [2024-11-25 10:30:13.885204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.916 [2024-11-25 10:30:13.885224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:06.916 [2024-11-25 10:30:13.885233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:06.916 [2024-11-25 10:30:13.885244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:06.916 [2024-11-25 10:30:13.885253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:06.916 [2024-11-25 10:30:13.885274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:06.916 [2024-11-25 10:30:13.885284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:06.916 [2024-11-25 10:30:13.885304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:06.916 [2024-11-25 10:30:13.885316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:06.916 [2024-11-25 10:30:13.885337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:06.916 [2024-11-25 10:30:13.885348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:06.916 [2024-11-25 10:30:13.885388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:06.916 [2024-11-25 10:30:13.885403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:06.916 [2024-11-25 10:30:13.885413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:06.916 [2024-11-25 10:30:13.885425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:06.916 [2024-11-25 10:30:13.885434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:06.916 [2024-11-25 10:30:13.885446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:06.916 [2024-11-25 10:30:13.885460] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:06.916 [2024-11-25 10:30:13.885475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:06.916 [2024-11-25 10:30:13.885522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:06.916 [2024-11-25 10:30:13.885532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:06.916 [2024-11-25 10:30:13.885545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:06.916 [2024-11-25 10:30:13.885556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:06.916 [2024-11-25 10:30:13.885569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:06.916 [2024-11-25 10:30:13.885579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:06.916 [2024-11-25 10:30:13.885592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:06.916 [2024-11-25 10:30:13.885602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:06.916 [2024-11-25 10:30:13.885617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:06.916 [2024-11-25 10:30:13.885675] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:06.916 [2024-11-25 10:30:13.885688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:06.916 [2024-11-25 10:30:13.885713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:06.916 [2024-11-25 10:30:13.885723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:06.916 [2024-11-25 10:30:13.885737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:06.916 [2024-11-25 10:30:13.885749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.916 [2024-11-25 10:30:13.885769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:06.916 [2024-11-25 10:30:13.885780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:25:06.916 [2024-11-25 10:30:13.885792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.916 [2024-11-25 10:30:13.885876] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:06.916 [2024-11-25 10:30:13.885894] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:13.518 [2024-11-25 10:30:19.866682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.866751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:13.518 [2024-11-25 10:30:19.866768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5990.511 ms 00:25:13.518 [2024-11-25 10:30:19.866781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.904136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.904379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.518 [2024-11-25 10:30:19.904405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.063 ms 00:25:13.518 [2024-11-25 10:30:19.904420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.904639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.904660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.518 [2024-11-25 10:30:19.904689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:13.518 [2024-11-25 10:30:19.904711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.965695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.965749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.518 [2024-11-25 10:30:19.965765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.041 ms 00:25:13.518 [2024-11-25 10:30:19.965780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.965888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.965906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.518 [2024-11-25 10:30:19.965919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.518 [2024-11-25 10:30:19.965951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.966405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.966427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.518 [2024-11-25 10:30:19.966440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:13.518 [2024-11-25 10:30:19.966454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.966629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.966650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.518 [2024-11-25 10:30:19.966681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:13.518 [2024-11-25 10:30:19.966699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:19.987613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:19.987816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:13.518 [2024-11-25 10:30:19.987839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.911 ms 00:25:13.518 [2024-11-25 10:30:19.987852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.000890] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:13.518 [2024-11-25 10:30:20.017855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.018111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:13.518 [2024-11-25 10:30:20.018143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.918 ms 00:25:13.518 [2024-11-25 10:30:20.018154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.186783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.186843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:13.518 [2024-11-25 10:30:20.186862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 168.756 ms 00:25:13.518 [2024-11-25 10:30:20.186875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.187094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.187108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:13.518 [2024-11-25 10:30:20.187125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:25:13.518 [2024-11-25 10:30:20.187135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.225110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.225296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:13.518 [2024-11-25 10:30:20.225325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.994 ms 00:25:13.518 [2024-11-25 10:30:20.225340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.262571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.262615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:13.518 [2024-11-25 10:30:20.262634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.190 ms 00:25:13.518 [2024-11-25 10:30:20.262644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.263336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.263359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:13.518 [2024-11-25 10:30:20.263374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:25:13.518 [2024-11-25 10:30:20.263384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.388144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.388381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:13.518 [2024-11-25 10:30:20.388414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.920 ms 00:25:13.518 [2024-11-25 10:30:20.388426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
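Every management step in the FTL startup trace above is emitted by mngt/ftl_mngt.c:trace_step as the same four NOTICE lines (Action, name:, duration:, status:), so per-step timings can be recovered from a captured log with ordinary text tools. A throwaway sketch, not an SPDK utility, assuming the unwrapped console output (one log entry per line) has been saved under the hypothetical name ftl_trim.log:

awk '
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/ {
        sub(/.*name: /, ""); step = $0            # remember the step name
    }
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
        sub(/.*duration: /, ""); sub(/ ms.*/, "") # keep just the millisecond value
        printf "%10.3f ms  %s\n", $0, step
    }
' ftl_trim.log | sort -rn

On this section that table would be headed by the 5990.511 ms Scrub NV cache step (the scrub of the 5 NV-cache chunks announced above), which dominates the total reported a few entries below when the 'FTL startup' management process finishes.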
00:25:13.518 [2024-11-25 10:30:20.425862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.425911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:13.518 [2024-11-25 10:30:20.425929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.362 ms 00:25:13.518 [2024-11-25 10:30:20.425940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.463446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.463662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:13.518 [2024-11-25 10:30:20.463691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.482 ms 00:25:13.518 [2024-11-25 10:30:20.463702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.500171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.500227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:13.518 [2024-11-25 10:30:20.500245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.441 ms 00:25:13.518 [2024-11-25 10:30:20.500255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.500332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.500344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:13.518 [2024-11-25 10:30:20.500361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:13.518 [2024-11-25 10:30:20.500371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.500457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.518 [2024-11-25 10:30:20.500468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:13.518 [2024-11-25 10:30:20.500481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:13.518 [2024-11-25 10:30:20.500514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.518 [2024-11-25 10:30:20.501443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.518 [2024-11-25 10:30:20.505666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6647.453 ms, result 0 00:25:13.518 [2024-11-25 10:30:20.506615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:13.518 { 00:25:13.518 "name": "ftl0", 00:25:13.518 "uuid": "7dfaaa21-aae5-4940-8ecb-2cbd33e49460" 00:25:13.518 } 00:25:13.518 10:30:20 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:13.518 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:13.518 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:13.519 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:13.519 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:13.519 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:13.519 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:13.778 10:30:20 ftl.ftl_trim --
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:14.037 [ 00:25:14.037 { 00:25:14.037 "name": "ftl0", 00:25:14.037 "aliases": [ 00:25:14.037 "7dfaaa21-aae5-4940-8ecb-2cbd33e49460" 00:25:14.037 ], 00:25:14.037 "product_name": "FTL disk", 00:25:14.037 "block_size": 4096, 00:25:14.037 "num_blocks": 23592960, 00:25:14.037 "uuid": "7dfaaa21-aae5-4940-8ecb-2cbd33e49460", 00:25:14.037 "assigned_rate_limits": { 00:25:14.037 "rw_ios_per_sec": 0, 00:25:14.037 "rw_mbytes_per_sec": 0, 00:25:14.037 "r_mbytes_per_sec": 0, 00:25:14.037 "w_mbytes_per_sec": 0 00:25:14.037 }, 00:25:14.037 "claimed": false, 00:25:14.037 "zoned": false, 00:25:14.037 "supported_io_types": { 00:25:14.037 "read": true, 00:25:14.037 "write": true, 00:25:14.037 "unmap": true, 00:25:14.037 "flush": true, 00:25:14.037 "reset": false, 00:25:14.037 "nvme_admin": false, 00:25:14.037 "nvme_io": false, 00:25:14.037 "nvme_io_md": false, 00:25:14.037 "write_zeroes": true, 00:25:14.037 "zcopy": false, 00:25:14.037 "get_zone_info": false, 00:25:14.037 "zone_management": false, 00:25:14.037 "zone_append": false, 00:25:14.037 "compare": false, 00:25:14.037 "compare_and_write": false, 00:25:14.037 "abort": false, 00:25:14.037 "seek_hole": false, 00:25:14.037 "seek_data": false, 00:25:14.037 "copy": false, 00:25:14.037 "nvme_iov_md": false 00:25:14.037 }, 00:25:14.037 "driver_specific": { 00:25:14.037 "ftl": { 00:25:14.037 "base_bdev": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 00:25:14.037 "cache": "nvc0n1p0" 00:25:14.037 } 00:25:14.037 } 00:25:14.037 } 00:25:14.037 ] 00:25:14.037 10:30:20 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:14.037 10:30:20 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:14.037 10:30:20 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:14.296 10:30:21 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:14.296 10:30:21 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:14.296 10:30:21 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:14.296 { 00:25:14.296 "name": "ftl0", 00:25:14.296 "aliases": [ 00:25:14.296 "7dfaaa21-aae5-4940-8ecb-2cbd33e49460" 00:25:14.296 ], 00:25:14.296 "product_name": "FTL disk", 00:25:14.296 "block_size": 4096, 00:25:14.296 "num_blocks": 23592960, 00:25:14.296 "uuid": "7dfaaa21-aae5-4940-8ecb-2cbd33e49460", 00:25:14.296 "assigned_rate_limits": { 00:25:14.296 "rw_ios_per_sec": 0, 00:25:14.296 "rw_mbytes_per_sec": 0, 00:25:14.296 "r_mbytes_per_sec": 0, 00:25:14.296 "w_mbytes_per_sec": 0 00:25:14.296 }, 00:25:14.296 "claimed": false, 00:25:14.296 "zoned": false, 00:25:14.296 "supported_io_types": { 00:25:14.296 "read": true, 00:25:14.296 "write": true, 00:25:14.296 "unmap": true, 00:25:14.296 "flush": true, 00:25:14.296 "reset": false, 00:25:14.296 "nvme_admin": false, 00:25:14.296 "nvme_io": false, 00:25:14.296 "nvme_io_md": false, 00:25:14.296 "write_zeroes": true, 00:25:14.296 "zcopy": false, 00:25:14.296 "get_zone_info": false, 00:25:14.296 "zone_management": false, 00:25:14.296 "zone_append": false, 00:25:14.296 "compare": false, 00:25:14.296 "compare_and_write": false, 00:25:14.296 "abort": false, 00:25:14.296 "seek_hole": false, 00:25:14.296 "seek_data": false, 00:25:14.296 "copy": false, 00:25:14.296 "nvme_iov_md": false 00:25:14.296 }, 00:25:14.296 "driver_specific": { 00:25:14.296 "ftl": { 00:25:14.296 "base_bdev": "ad3eeaf3-d085-4a2e-9b41-866189f8c779", 
00:25:14.296 "cache": "nvc0n1p0" 00:25:14.296 } 00:25:14.296 } 00:25:14.296 } 00:25:14.296 ]' 00:25:14.296 10:30:21 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:14.555 10:30:21 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:14.555 10:30:21 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:14.555 [2024-11-25 10:30:21.625919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.556 [2024-11-25 10:30:21.625985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:14.556 [2024-11-25 10:30:21.626002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:14.556 [2024-11-25 10:30:21.626019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.556 [2024-11-25 10:30:21.626060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:14.556 [2024-11-25 10:30:21.630353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.556 [2024-11-25 10:30:21.630397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:14.556 [2024-11-25 10:30:21.630417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:25:14.556 [2024-11-25 10:30:21.630428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.556 [2024-11-25 10:30:21.631042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.556 [2024-11-25 10:30:21.631071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:14.556 [2024-11-25 10:30:21.631086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:14.556 [2024-11-25 10:30:21.631096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.556 [2024-11-25 10:30:21.633941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.556 [2024-11-25 10:30:21.633973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:14.556 [2024-11-25 10:30:21.633987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:25:14.556 [2024-11-25 10:30:21.633998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.556 [2024-11-25 10:30:21.639697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.556 [2024-11-25 10:30:21.639735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:14.556 [2024-11-25 10:30:21.639750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.639 ms 00:25:14.556 [2024-11-25 10:30:21.639760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.816 [2024-11-25 10:30:21.677949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.816 [2024-11-25 10:30:21.678008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:14.816 [2024-11-25 10:30:21.678031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.151 ms 00:25:14.816 [2024-11-25 10:30:21.678042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.816 [2024-11-25 10:30:21.699651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.816 [2024-11-25 10:30:21.699828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:14.816 [2024-11-25 10:30:21.699861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.532 ms
00:25:14.816 [2024-11-25 10:30:21.699872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:14.816 [2024-11-25 10:30:21.700090] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.124 ms, status 0
00:25:14.816 [2024-11-25 10:30:21.736254] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 36.134 ms, status 0
00:25:14.816 [2024-11-25 10:30:21.771792] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 35.417 ms, status 0
00:25:14.816 [2024-11-25 10:30:21.807830] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 35.934 ms, status 0
00:25:14.816 [2024-11-25 10:30:21.843893] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 35.899 ms, status 0
00:25:14.816 [2024-11-25 10:30:21.844069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:14.816 [2024-11-25 10:30:21.844088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
    (Bands 2-100 identical: 0 / 261120 wr_cnt: 0 state: free)
00:25:14.818 [2024-11-25 10:30:21.845395] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:14.818 [2024-11-25 10:30:21.845410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460
00:25:14.818 [2024-11-25 10:30:21.845422] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:14.818 [2024-11-25 10:30:21.845434] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:14.818 [2024-11-25 10:30:21.845444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:14.818 [2024-11-25 10:30:21.845460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:14.818 [2024-11-25 10:30:21.845469] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:14.818 [2024-11-25 10:30:21.845482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
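Worth making explicit: the WAF: inf line follows directly from the two counters above it. Write amplification factor is, by the usual definition (the exact accounting inside ftl_debug.c is not shown in this log, so the formula below is the standard one, not a quote of the source):

    \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} \to \infty

All 960 writes at this point are the FTL's own metadata persists from the shutdown sequence above; no user data has been written yet, so the ratio is printed as inf rather than a number.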
00:25:14.818 [2024-11-25 10:30:21.845500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:14.818 [2024-11-25 10:30:21.845513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:14.818 [2024-11-25 10:30:21.845522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:14.818 [2024-11-25 10:30:21.845537] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.472 ms, status 0
00:25:14.818 [2024-11-25 10:30:21.865443] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 19.862 ms, status 0
00:25:14.818 [2024-11-25 10:30:21.866174] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.545 ms, status 0
00:25:15.078 [2024-11-25 10:30:21.934062 .. 10:30:22.171924] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback (each step duration 0.000 ms, status 0):
    Initialize reloc; Initialize bands metadata; Initialize trim map; Initialize valid map;
    Initialize NV cache; Initialize metadata; Initialize core IO channel; Initialize bands;
    Initialize memory pools; Initialize superblock; Open cache bdev; Open base bdev
00:25:15.078 [2024-11-25 10:30:22.172123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.077 ms, result 0
00:25:15.078 true
00:25:15.337 10:30:22 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78081
00:25:15.337 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- '[' -z 78081 ']'
00:25:15.337 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- kill -0 78081
00:25:15.337 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- uname
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- '[' Linux = Linux ']'
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- ps --no-headers -o comm= 78081
00:25:15.338 killing process with pid 78081
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- process_name=reactor_0
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- '[' reactor_0 = sudo ']'
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- echo 'killing process with pid 78081'
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- kill 78081
00:25:15.338 10:30:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- wait 78081
00:25:20.614 10:30:27 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:25:21.182 65536+0 records in
00:25:21.182 65536+0 records out
00:25:21.182 268435456 bytes (268 MB, 256 MiB) copied, 1.03907 s, 258 MB/s
00:25:21.182 10:30:28 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:21.441 [2024-11-25 10:30:28.341687] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
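The xtrace lines above show every command killprocess ran for pid 78081, which is enough to reconstruct the helper's control flow. A sketch only -- the real function lives in test/common/autotest_common.sh, and the sudo branch (tested at @964 but not taken in this run) is omitted because its body never appears in the trace:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                 # @954: require a pid argument
        kill -0 "$pid" 2>/dev/null || return 0    # @958: signal 0 only probes liveness
        local process_name=
        if [[ $(uname) == Linux ]]; then          # @959: Linux-only comm lookup
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: 'reactor_0' here
        fi
        # @964: '[' reactor_0 = sudo ']' is false in this run, so the sudo-wrapper
        # path is skipped; its body is not reconstructible from the trace.
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973: default SIGTERM
        wait "$pid"                               # @978: reap the child, keep its status
    }

The dd step that follows also checks out arithmetically: 65536 blocks x 4 KiB = 268,435,456 bytes = 256 MiB, and 268 MB in 1.03907 s is the reported 258 MB/s.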
00:25:21.441 [2024-11-25 10:30:28.341807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78311 ]
00:25:21.441 [2024-11-25 10:30:28.523483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.700 [2024-11-25 10:30:28.643830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:21.959 [2024-11-25 10:30:28.997971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:21.959 [2024-11-25 10:30:28.998283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:22.220 [2024-11-25 10:30:29.161291] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.016 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.165188] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 3.511 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.165590] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:22.220 [2024-11-25 10:30:29.166608] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:22.220 [2024-11-25 10:30:29.166647] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 1.069 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.168217] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:22.220 [2024-11-25 10:30:29.188388] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 20.203 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.188632] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.032 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.196089] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 7.384 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.196281] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.067 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.196354] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.008 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.196412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:25:22.220 [2024-11-25 10:30:29.201168] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 4.771 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.201328] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.011 ms, status 0
00:25:22.220 [2024-11-25 10:30:29.201393] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:22.220 [2024-11-25 10:30:29.201417] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:22.220 [2024-11-25 10:30:29.201453] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:22.220 [2024-11-25 10:30:29.201471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:22.220 [2024-11-25 10:30:29.201577] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:22.220 [2024-11-25 10:30:29.201593] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:22.220 [2024-11-25 10:30:29.201606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:22.220 [2024-11-25 10:30:29.201623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
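In the raw console output each management step is four NOTICE lines tagged 427 (Action/Rollback), 428 (name), 430 (duration) and 431 (status); they are condensed to one line apiece in this cleaned-up transcript. Against a raw per-line log, a throwaway awk filter is enough to pull out a step-duration table (console.log is a placeholder name; this is an ad-hoc convenience, not SPDK tooling):

    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' console.log

which yields lines like 'Load super block -> 20.203 ms'.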
00:25:22.221 [2024-11-25 10:30:29.201635] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:22.221 [2024-11-25 10:30:29.201647] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:25:22.221 [2024-11-25 10:30:29.201656] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:22.221 [2024-11-25 10:30:29.201666] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:25:22.221 [2024-11-25 10:30:29.201677] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:25:22.221 [2024-11-25 10:30:29.201687] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.297 ms, status 0
00:25:22.221 [2024-11-25 10:30:29.201796] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.055 ms, status 0
00:25:22.221 [2024-11-25 10:30:29.201923] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region            offset (MiB)   blocks (MiB)
    sb                        0.00           0.12
    l2p                       0.12          90.00
    band_md                  90.12           0.50
    band_md_mirror           90.62           0.50
    nvc_md                  123.88           0.12
    nvc_md_mirror           124.00           0.12
    p2l0                     91.12           8.00
    p2l1                     99.12           8.00
    p2l2                    107.12           8.00
    p2l3                    115.12           8.00
    trim_md                 123.12           0.25
    trim_md_mirror          123.38           0.25
    trim_log                123.62           0.12
    trim_log_mirror         123.75           0.12
00:25:22.221 [2024-11-25 10:30:29.202433] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region            offset (MiB)   blocks (MiB)
    sb_mirror                 0.00           0.12
    vmap                 102400.25           3.38
    data_btm                  0.25      102400.00
00:25:22.221 [2024-11-25 10:30:29.202564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    type:0x0        ver:5  blk_offs:0x0        blk_sz:0x20
    type:0x2        ver:0  blk_offs:0x20       blk_sz:0x5a00
    type:0x3        ver:2  blk_offs:0x5a20     blk_sz:0x80
    type:0x4        ver:2  blk_offs:0x5aa0     blk_sz:0x80
    type:0xa        ver:2  blk_offs:0x5b20     blk_sz:0x800
    type:0xb        ver:2  blk_offs:0x6320     blk_sz:0x800
    type:0xc        ver:2  blk_offs:0x6b20     blk_sz:0x800
    type:0xd        ver:2  blk_offs:0x7320     blk_sz:0x800
    type:0xe        ver:0  blk_offs:0x7b20     blk_sz:0x40
    type:0xf        ver:0  blk_offs:0x7b60     blk_sz:0x40
    type:0x10       ver:1  blk_offs:0x7ba0     blk_sz:0x20
    type:0x11       ver:1  blk_offs:0x7bc0     blk_sz:0x20
    type:0x6        ver:2  blk_offs:0x7be0     blk_sz:0x20
    type:0x7        ver:2  blk_offs:0x7c00     blk_sz:0x20
    type:0xfffffffe ver:0  blk_offs:0x7c20     blk_sz:0x13b6e0
00:25:22.222 [2024-11-25 10:30:29.202753] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    type:0x1        ver:5  blk_offs:0x0        blk_sz:0x20
    type:0xfffffffe ver:0  blk_offs:0x20       blk_sz:0x20
    type:0x9        ver:0  blk_offs:0x40       blk_sz:0x1900000
    type:0x5        ver:0  blk_offs:0x1900040  blk_sz:0x360
    type:0xfffffffe ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
00:25:22.222 [2024-11-25 10:30:29.202817] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.949 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.242550] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 39.665 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.243070] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.057 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.302575] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 59.542 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.302808] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.303284] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.419 ms, status 0
00:25:22.222 [2024-11-25 10:30:29.303442] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.095 ms, status 0
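The nvc table above tiles the cache device with no gaps: each region's blk_offs equals the previous blk_offs plus blk_sz (0x20 + 0x5a00 = 0x5a20, 0x5a20 + 0x80 = 0x5aa0, and so on), with the trailing type:0xfffffffe entry as the free remainder. Assuming the 4 KiB FTL block size implied by the 0x20-block = 0.12 MiB regions, the end of that free region reproduces the capacity reported earlier:

    0x7c20 + 0x13b6e0 = 0x143300 = 1,323,776 blocks;  1,323,776 x 4 KiB = 5171.00 MiB

which matches 'NV cache device capacity: 5171.00 MiB' exactly.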
00:25:22.222 [2024-11-25 10:30:29.321850] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 18.352 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.341812] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:25:22.481 [2024-11-25 10:30:29.341896] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:22.481 [2024-11-25 10:30:29.341914] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 19.671 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.372560] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 30.545 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.391299] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 18.380 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.409190] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 17.764 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.410066] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.675 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.495793] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 85.775 ms, status 0
00:25:22.481 [2024-11-25 10:30:29.506911] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:25:22.481 [2024-11-25 10:30:29.523185] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 27.225 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.523396] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.006 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.523487] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.032 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.523575] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.004 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.523642] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:22.482 [2024-11-25 10:30:29.523655] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.013 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.559925] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 36.274 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.560111] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.040 ms, status 0
00:25:22.482 [2024-11-25 10:30:29.561094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:22.482 [2024-11-25 10:30:29.565417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.111 ms, result 0
00:25:22.482 [2024-11-25 10:30:29.566269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:22.482 [2024-11-25 10:30:29.585075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:23.905  [2024-11-25T10:30:31.955Z] Copying: 26/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:32.892Z] Copying: 53/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:33.829Z] Copying: 79/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:34.765Z] Copying: 105/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:35.702Z] Copying: 131/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:36.648Z] Copying: 159/256 [MB] (27 MBps)
00:25:23.905  [2024-11-25T10:30:37.586Z] Copying: 186/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:38.965Z] Copying: 213/256 [MB] (27 MBps)
00:25:23.905  [2024-11-25T10:30:39.224Z] Copying: 239/256 [MB] (26 MBps)
00:25:23.905  [2024-11-25T10:30:39.224Z] Copying: 256/256 [MB] (average 26 MBps)
00:25:32.112 [2024-11-25 10:30:39.199991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:32.112 [2024-11-25 10:30:39.214746] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.003 ms, status 0
00:25:32.112 [2024-11-25 10:30:39.214846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:32.112 [2024-11-25 10:30:39.219110] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 4.254 ms, status 0
00:25:32.112 [2024-11-25 10:30:39.221066] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 1.858 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.228042] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 6.895 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.233800] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 5.659 ms, status 0
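A quick cross-check on the copy pass above: at the reported average of 26 MBps, the 256 MiB random pattern needs 256 / 26 = 9.8 s, which matches the wall clock -- 'FTL startup' finished at 10:30:29.57 and the final 256/256 tick lands at 10:30:39.22, about 9.7 s later.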
00:25:32.372 [2024-11-25 10:30:39.270711] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 36.865 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.291176] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 20.381 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.291378] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.072 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.328092] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 36.710 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.365091] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 36.914 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.401240] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 36.091 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.436858] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 35.525 ms, status 0
00:25:32.372 [2024-11-25 10:30:39.436991] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:32.372 [2024-11-25 10:30:39.437008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
    (Bands 2-74 identical: 0 / 261120 wr_cnt: 0 state: free)
00:25:32.373 [2024-11-25 10:30:39.437818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120
wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.437998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:32.373 [2024-11-25 10:30:39.438122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:32.373 [2024-11-25 10:30:39.438140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:32.373 [2024-11-25 10:30:39.438150] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460
00:25:32.373 [2024-11-25 10:30:39.438161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:32.373 [2024-11-25 10:30:39.438170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:32.373 [2024-11-25 10:30:39.438180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:32.373 [2024-11-25 10:30:39.438189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:32.373 [2024-11-25 10:30:39.438200] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:32.373 [2024-11-25 10:30:39.438210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:32.373 [2024-11-25 10:30:39.438220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:32.373 [2024-11-25 10:30:39.438229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:32.373 [2024-11-25 10:30:39.438239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:32.373 [2024-11-25 10:30:39.438254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.374 [2024-11-25 10:30:39.438271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:32.374 [2024-11-25 10:30:39.438292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms
00:25:32.374 [2024-11-25 10:30:39.438309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.374 [2024-11-25 10:30:39.457962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.374 [2024-11-25 10:30:39.457998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:32.374 [2024-11-25 10:30:39.458011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.653 ms
00:25:32.374 [2024-11-25 10:30:39.458021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.374 [2024-11-25 10:30:39.458593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.374 [2024-11-25 10:30:39.458606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:32.374 [2024-11-25 10:30:39.458618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms
00:25:32.374 [2024-11-25 10:30:39.458627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.514302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.514513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:32.632 [2024-11-25 10:30:39.514537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.514549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.514634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.514646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:32.632 [2024-11-25 10:30:39.514657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.514667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
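
The four-line Action/name/duration/status groups above, and the Rollback groups that follow, are the signature of a staged management pipeline: every named step is timed, and on teardown the completed setup steps are revisited in reverse order (note how the Rollback names echo the Initialize steps, which continue below). One figure from the statistics dump is worth spelling out: WAF (write amplification factor) is conventionally total media writes divided by user writes, so with total writes: 960 and user writes: 0 the ratio degenerates and is printed as inf. The following is a minimal, self-contained C sketch of such a timed step runner with reverse rollback; mgmt_step, run_pipeline, and the printed format are invented here for illustration and are not SPDK's actual internals.

    /* Hypothetical sketch of a timed step pipeline with reverse rollback.
     * Each step logs the same quadruple seen in the trace above. */
    #include <stdio.h>
    #include <time.h>

    struct mgmt_step {
        const char *name;
        int (*action)(void);    /* returns 0 on success */
        void (*rollback)(void); /* optional; may be NULL */
    };

    static double ms_since(const struct timespec *start)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - start->tv_sec) * 1e3 +
               (now.tv_nsec - start->tv_nsec) / 1e6;
    }

    static int run_pipeline(const struct mgmt_step *steps, int n)
    {
        int i, rc = 0;

        for (i = 0; i < n; i++) {
            struct timespec t0;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            rc = steps[i].action();
            printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
                   steps[i].name, ms_since(&t0), rc);
            if (rc != 0)
                break;
        }
        /* Unwind the completed prefix in reverse order. Here this fires
         * on failure; a shutdown path could walk the same table in
         * reverse unconditionally, which would match the log above. */
        if (rc != 0) {
            while (i-- > 0) {
                struct timespec t0;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                if (steps[i].rollback)
                    steps[i].rollback();
                printf("Rollback\nname: %s\nduration: %.3f ms\nstatus: 0\n",
                       steps[i].name, ms_since(&t0));
            }
        }
        return rc;
    }

Keeping teardown table-driven and symmetric with setup is what makes the Rollback entries below read as a mirror image of the startup Actions.
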
00:25:32.632 [2024-11-25 10:30:39.514719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.514732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:32.632 [2024-11-25 10:30:39.514743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.514754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.514773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.514788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:32.632 [2024-11-25 10:30:39.514799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.514810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.637251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.637334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:32.632 [2024-11-25 10:30:39.637349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.637360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.736829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:32.632 [2024-11-25 10:30:39.737122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.737135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.737209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:32.632 [2024-11-25 10:30:39.737233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.737243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.737311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:32.632 [2024-11-25 10:30:39.737338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.737348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.737481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:32.632 [2024-11-25 10:30:39.737535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.737546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.737589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:32.632 [2024-11-25 10:30:39.737612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.632 [2024-11-25 10:30:39.737627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.632 [2024-11-25 10:30:39.737666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.632 [2024-11-25 10:30:39.737678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:32.633 [2024-11-25 10:30:39.737688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.633 [2024-11-25 10:30:39.737698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.633 [2024-11-25 10:30:39.737741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:32.633 [2024-11-25 10:30:39.737752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:32.633 [2024-11-25 10:30:39.737763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:32.633 [2024-11-25 10:30:39.737777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.633 [2024-11-25 10:30:39.737938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.034 ms, result 0
00:25:34.008
00:25:34.008
00:25:34.008 10:30:40 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78441
00:25:34.008 10:30:40 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78441
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78441 ']'
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:34.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:34.008 10:30:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:25:34.008 10:30:40 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:34.008 [2024-11-25 10:30:40.977481] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
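
The waitforlisten step traced above is what gates the rest of trim.sh: it polls until the freshly launched spdk_tgt accepts connections on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. The real helper is a bash function in autotest_common.sh; what follows is only a C sketch of the same polling pattern, with illustrative names and an assumed 100 ms retry interval.

    /* Sketch: block until a UNIX-domain socket accepts connections,
     * mirroring the harness's waitforlisten loop (names are invented). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_rpc_socket(const char *path, int max_retries)
    {
        struct sockaddr_un addr;

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd); /* target is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000); /* retry every 100 ms */
        }
        return -1; /* process never started listening */
    }

    int main(void)
    {
        return wait_for_rpc_socket("/var/tmp/spdk.sock", 100) ? 1 : 0;
    }

Probing with connect() rather than checking for the socket file is the safer design: the path can exist on disk before the target's RPC server is actually accepting.
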
00:25:34.009 [2024-11-25 10:30:40.977622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78441 ] 00:25:34.267 [2024-11-25 10:30:41.155313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.267 [2024-11-25 10:30:41.278274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.210 10:30:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.210 10:30:42 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:35.210 10:30:42 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:35.511 [2024-11-25 10:30:42.332149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.511 [2024-11-25 10:30:42.332216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.511 [2024-11-25 10:30:42.518555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.518603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.511 [2024-11-25 10:30:42.518624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:35.511 [2024-11-25 10:30:42.518635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.522433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.522622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.511 [2024-11-25 10:30:42.522651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.781 ms 00:25:35.511 [2024-11-25 10:30:42.522663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.522826] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.511 [2024-11-25 10:30:42.523949] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.511 [2024-11-25 10:30:42.523988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.524000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.511 [2024-11-25 10:30:42.524013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:25:35.511 [2024-11-25 10:30:42.524025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.525529] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:35.511 [2024-11-25 10:30:42.545170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.545349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.511 [2024-11-25 10:30:42.545374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.678 ms 00:25:35.511 [2024-11-25 10:30:42.545390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.545543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.545566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.511 [2024-11-25 10:30:42.545578] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:35.511 [2024-11-25 10:30:42.545593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.552335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.552522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.511 [2024-11-25 10:30:42.552545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.694 ms 00:25:35.511 [2024-11-25 10:30:42.552561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.552717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.552737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.511 [2024-11-25 10:30:42.552749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:25:35.511 [2024-11-25 10:30:42.552771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.552805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.511 [2024-11-25 10:30:42.552822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.511 [2024-11-25 10:30:42.552833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:35.511 [2024-11-25 10:30:42.552847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.511 [2024-11-25 10:30:42.552875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:35.511 [2024-11-25 10:30:42.557729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.512 [2024-11-25 10:30:42.557762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.512 [2024-11-25 10:30:42.557780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.863 ms 00:25:35.512 [2024-11-25 10:30:42.557790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.512 [2024-11-25 10:30:42.557871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.512 [2024-11-25 10:30:42.557883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.512 [2024-11-25 10:30:42.557899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:35.512 [2024-11-25 10:30:42.557915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.512 [2024-11-25 10:30:42.557943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.512 [2024-11-25 10:30:42.557968] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.512 [2024-11-25 10:30:42.558018] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.512 [2024-11-25 10:30:42.558039] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.512 [2024-11-25 10:30:42.558134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.512 [2024-11-25 10:30:42.558147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.512 [2024-11-25 10:30:42.558173] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.512 [2024-11-25 10:30:42.558186] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558202] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:35.512 [2024-11-25 10:30:42.558229] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.512 [2024-11-25 10:30:42.558239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.512 [2024-11-25 10:30:42.558258] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.512 [2024-11-25 10:30:42.558269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.512 [2024-11-25 10:30:42.558283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.512 [2024-11-25 10:30:42.558294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:25:35.512 [2024-11-25 10:30:42.558309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.512 [2024-11-25 10:30:42.558389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.512 [2024-11-25 10:30:42.558405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.512 [2024-11-25 10:30:42.558416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:35.512 [2024-11-25 10:30:42.558430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.512 [2024-11-25 10:30:42.558545] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.512 [2024-11-25 10:30:42.558564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.512 [2024-11-25 10:30:42.558575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.512 [2024-11-25 10:30:42.558615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.512 [2024-11-25 10:30:42.558657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.512 [2024-11-25 10:30:42.558681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.512 [2024-11-25 10:30:42.558695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:35.512 [2024-11-25 10:30:42.558704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.512 [2024-11-25 10:30:42.558720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.512 [2024-11-25 10:30:42.558730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:35.512 [2024-11-25 10:30:42.558744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 
[2024-11-25 10:30:42.558755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.512 [2024-11-25 10:30:42.558777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.512 [2024-11-25 10:30:42.558846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.512 [2024-11-25 10:30:42.558891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.512 [2024-11-25 10:30:42.558924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.512 [2024-11-25 10:30:42.558962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:35.512 [2024-11-25 10:30:42.558972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.512 [2024-11-25 10:30:42.558986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.512 [2024-11-25 10:30:42.558995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:35.512 [2024-11-25 10:30:42.559011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.512 [2024-11-25 10:30:42.559021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.512 [2024-11-25 10:30:42.559035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:35.512 [2024-11-25 10:30:42.559044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.512 [2024-11-25 10:30:42.559059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.512 [2024-11-25 10:30:42.559068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:35.512 [2024-11-25 10:30:42.559086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.559096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.512 [2024-11-25 10:30:42.559110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:35.512 [2024-11-25 10:30:42.559119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.559133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.512 [2024-11-25 10:30:42.559152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.512 [2024-11-25 10:30:42.559175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.512 [2024-11-25 10:30:42.559193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.512 [2024-11-25 10:30:42.559219] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:35.513 [2024-11-25 10:30:42.559230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.513 [2024-11-25 10:30:42.559245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.513 [2024-11-25 10:30:42.559255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.513 [2024-11-25 10:30:42.559269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.513 [2024-11-25 10:30:42.559279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.513 [2024-11-25 10:30:42.559294] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.513 [2024-11-25 10:30:42.559308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:35.513 [2024-11-25 10:30:42.559340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:35.513 [2024-11-25 10:30:42.559357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:35.513 [2024-11-25 10:30:42.559368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:35.513 [2024-11-25 10:30:42.559384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:35.513 [2024-11-25 10:30:42.559395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:35.513 [2024-11-25 10:30:42.559410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:35.513 [2024-11-25 10:30:42.559421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:35.513 [2024-11-25 10:30:42.559436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:35.513 [2024-11-25 10:30:42.559447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:35.513 [2024-11-25 10:30:42.559527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.513 [2024-11-25 
10:30:42.559539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.513 [2024-11-25 10:30:42.559575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.513 [2024-11-25 10:30:42.559601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.513 [2024-11-25 10:30:42.559620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.513 [2024-11-25 10:30:42.559644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.513 [2024-11-25 10:30:42.559655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.513 [2024-11-25 10:30:42.559671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:25:35.513 [2024-11-25 10:30:42.559686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.513 [2024-11-25 10:30:42.600498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.513 [2024-11-25 10:30:42.600547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.513 [2024-11-25 10:30:42.600567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.790 ms 00:25:35.513 [2024-11-25 10:30:42.600597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.513 [2024-11-25 10:30:42.600750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.513 [2024-11-25 10:30:42.600763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.513 [2024-11-25 10:30:42.600780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:35.513 [2024-11-25 10:30:42.600790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.646992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.647046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.773 [2024-11-25 10:30:42.647066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.242 ms 00:25:35.773 [2024-11-25 10:30:42.647092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.647220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.647234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.773 [2024-11-25 10:30:42.647250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:35.773 [2024-11-25 10:30:42.647260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.647731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.647751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.773 [2024-11-25 10:30:42.647767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:25:35.773 [2024-11-25 10:30:42.647777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.647909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.647923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.773 [2024-11-25 10:30:42.647938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:35.773 [2024-11-25 10:30:42.647949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.669950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.669995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.773 [2024-11-25 10:30:42.670015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.005 ms 00:25:35.773 [2024-11-25 10:30:42.670026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.689286] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:35.773 [2024-11-25 10:30:42.689482] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.773 [2024-11-25 10:30:42.689530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.689542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.773 [2024-11-25 10:30:42.689558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.389 ms 00:25:35.773 [2024-11-25 10:30:42.689581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.719000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.719041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.773 [2024-11-25 10:30:42.719062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.372 ms 00:25:35.773 [2024-11-25 10:30:42.719072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.737314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.737353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.773 [2024-11-25 10:30:42.737377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:25:35.773 [2024-11-25 10:30:42.737387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.755564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.755626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.773 [2024-11-25 10:30:42.755647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.117 ms 00:25:35.773 [2024-11-25 10:30:42.755658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.756515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.756552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.773 [2024-11-25 10:30:42.756572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:25:35.773 [2024-11-25 10:30:42.756583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 
10:30:42.855749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.773 [2024-11-25 10:30:42.855819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.773 [2024-11-25 10:30:42.855858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.284 ms 00:25:35.773 [2024-11-25 10:30:42.855869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.773 [2024-11-25 10:30:42.867130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:36.032 [2024-11-25 10:30:42.883597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.883658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.032 [2024-11-25 10:30:42.883696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.604 ms 00:25:36.032 [2024-11-25 10:30:42.883711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.032 [2024-11-25 10:30:42.883822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.883840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.032 [2024-11-25 10:30:42.883852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.032 [2024-11-25 10:30:42.883868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.032 [2024-11-25 10:30:42.883921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.883937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.032 [2024-11-25 10:30:42.883948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:36.032 [2024-11-25 10:30:42.883969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.032 [2024-11-25 10:30:42.883994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.884010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.032 [2024-11-25 10:30:42.884020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.032 [2024-11-25 10:30:42.884033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.032 [2024-11-25 10:30:42.884070] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.032 [2024-11-25 10:30:42.884087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.884100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.032 [2024-11-25 10:30:42.884113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:36.032 [2024-11-25 10:30:42.884122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.032 [2024-11-25 10:30:42.920743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.032 [2024-11-25 10:30:42.920783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.032 [2024-11-25 10:30:42.920799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.648 ms 00:25:36.032 [2024-11-25 10:30:42.920810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.033 [2024-11-25 10:30:42.920943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.033 [2024-11-25 10:30:42.920956] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:36.033 [2024-11-25 10:30:42.920983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:25:36.033 [2024-11-25 10:30:42.920994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:36.033 [2024-11-25 10:30:42.922015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:36.033 [2024-11-25 10:30:42.926359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.781 ms, result 0
00:25:36.033 [2024-11-25 10:30:42.927566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:36.033 Some configs were skipped because the RPC state that can call them passed over.
00:25:36.033 10:30:42 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:36.292 [2024-11-25 10:30:43.159260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:36.292 [2024-11-25 10:30:43.159481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:36.292 [2024-11-25 10:30:43.159608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms
00:25:36.292 [2024-11-25 10:30:43.159660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:36.292 [2024-11-25 10:30:43.159767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.101 ms, result 0
00:25:36.292 true
00:25:36.292 10:30:43 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:36.292 [2024-11-25 10:30:43.342899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:36.292 [2024-11-25 10:30:43.343144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:36.292 [2024-11-25 10:30:43.343258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms
00:25:36.292 [2024-11-25 10:30:43.343362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:36.292 [2024-11-25 10:30:43.343454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.776 ms, result 0
00:25:36.292 true
00:25:36.292 10:30:43 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78441
00:25:36.292 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78441 ']'
00:25:36.292 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78441
00:25:36.292 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:36.292 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:36.292 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78441
00:25:36.551 killing process with pid 78441
00:25:36.551 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:36.551 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:36.551 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78441'
00:25:36.551 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78441
00:25:36.551 10:30:43 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78441
00:25:37.487 [2024-11-25 10:30:44.525532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.525608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:37.487 [2024-11-25 10:30:44.525623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:25:37.487 [2024-11-25 10:30:44.525636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
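
The two bdev_ftl_unmap calls above were issued through rpc.py, which is a JSON-RPC 2.0 client talking to the target over the same /var/tmp/spdk.sock socket; the bare true lines are the results it prints back, and each call shows up target-side as the short 'FTL trim' management process traced around it. A hypothetical C equivalent of the first call follows; the JSON parameter names ("name", "lba", "num_blocks") are inferred from the CLI flags, so treat the request body as an assumption rather than the documented wire format.

    /* Sketch: send one JSON-RPC 2.0 request over an SPDK-style UNIX socket.
     * The bdev_ftl_unmap params mirror: -b ftl0 --lba 0 --num_blocks 1024
     * (assumed spelling; check the real RPC docs before relying on it). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_ftl_unmap\","
            "\"params\":{\"name\":\"ftl0\",\"lba\":0,\"num_blocks\":1024}}";
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char resp[4096];
        ssize_t n;
        int fd;

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        if (write(fd, req, strlen(req)) != (ssize_t)strlen(req)) {
            perror("write");
            close(fd);
            return 1;
        }
        n = read(fd, resp, sizeof(resp) - 1); /* expect {"result": true, ...} */
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
        close(fd);
        return 0;
    }

The kill 78441 / wait 78441 pair that follows the RPCs is what triggers the second 'FTL shutdown' trace continuing below: SIGTERM makes the target run the same step table in reverse before exiting.
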
00:25:37.487 [2024-11-25 10:30:44.525666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:37.487 [2024-11-25 10:30:44.529909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.529946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:37.487 [2024-11-25 10:30:44.529964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.228 ms
00:25:37.487 [2024-11-25 10:30:44.529975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.530241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.530261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:37.487 [2024-11-25 10:30:44.530282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms
00:25:37.487 [2024-11-25 10:30:44.530301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.533619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.533657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:37.487 [2024-11-25 10:30:44.533675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms
00:25:37.487 [2024-11-25 10:30:44.533685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.539360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.539397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:37.487 [2024-11-25 10:30:44.539412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.643 ms
00:25:37.487 [2024-11-25 10:30:44.539422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.554209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.554257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:37.487 [2024-11-25 10:30:44.554276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.749 ms
00:25:37.487 [2024-11-25 10:30:44.554286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.564508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.564552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:37.487 [2024-11-25 10:30:44.564568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.166 ms
00:25:37.487 [2024-11-25 10:30:44.564579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.564723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.564737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:37.487 [2024-11-25 10:30:44.564750] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:25:37.487 [2024-11-25 10:30:44.564760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.580142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.580179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:25:37.487 [2024-11-25 10:30:44.580195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.382 ms
00:25:37.487 [2024-11-25 10:30:44.580205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.487 [2024-11-25 10:30:44.595483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.487 [2024-11-25 10:30:44.595526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:25:37.487 [2024-11-25 10:30:44.595546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.248 ms
00:25:37.487 [2024-11-25 10:30:44.595555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.747 [2024-11-25 10:30:44.610158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.747 [2024-11-25 10:30:44.610317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:25:37.747 [2024-11-25 10:30:44.610346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.573 ms
00:25:37.747 [2024-11-25 10:30:44.610357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.747 [2024-11-25 10:30:44.624873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.747 [2024-11-25 10:30:44.625040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:25:37.747 [2024-11-25 10:30:44.625066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.458 ms
00:25:37.747 [2024-11-25 10:30:44.625075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.747 [2024-11-25 10:30:44.625175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:37.747 [2024-11-25 10:30:44.625194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100 omitted -- every band reports the identical line: 0 / 261120 wr_cnt: 0 state: free]
00:25:37.748 [2024-11-25 10:30:44.626696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:37.748 [2024-11-25 10:30:44.626727] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460
00:25:37.748 [2024-11-25 10:30:44.626745] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:37.748 [2024-11-25 10:30:44.626757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:37.748 [2024-11-25 10:30:44.626767] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:37.748 [2024-11-25 10:30:44.626779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:37.748 [2024-11-25 10:30:44.626789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:37.748 [2024-11-25 10:30:44.626801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:37.748 [2024-11-25 10:30:44.626811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:37.748 [2024-11-25 10:30:44.626826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:37.748 [2024-11-25 10:30:44.626841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:37.748 [2024-11-25 10:30:44.626860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
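[Editor's note] The stats dump above reports total writes 960 against user writes 0, which is why WAF prints as inf: every media write in this shutdown pass is FTL metadata (L2P, band info, trim, superblock), with no user data written. A minimal sketch of the same calculation, assuming the usual definition of write amplification factor (media writes over user writes); the helper below is illustrative, not code from the SPDK tree:

    #!/usr/bin/env bash
    # Hypothetical helper: recompute the WAF that ftl_dev_dump_stats prints,
    # from the two counters in the dump above. With user_writes=0 the ratio
    # is undefined, which the FTL debug dump renders as "inf".
    total_writes=960   # "total writes" from the dump (all metadata here)
    user_writes=0      # "user writes" from the dump
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { waf = (u > 0) ? t / u : "inf"; print "WAF: " waf }'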
00:25:37.748 [2024-11-25 10:30:44.626871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:37.748 [2024-11-25 10:30:44.626884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:25:37.748 [2024-11-25 10:30:44.626894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.646982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.748 [2024-11-25 10:30:44.647135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:37.748 [2024-11-25 10:30:44.647171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.082 ms 00:25:37.748 [2024-11-25 10:30:44.647183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.647781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.748 [2024-11-25 10:30:44.647801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:37.748 [2024-11-25 10:30:44.647823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:25:37.748 [2024-11-25 10:30:44.647834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.716598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.748 [2024-11-25 10:30:44.716643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:37.748 [2024-11-25 10:30:44.716660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.748 [2024-11-25 10:30:44.716671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.716779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.748 [2024-11-25 10:30:44.716791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.748 [2024-11-25 10:30:44.716808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.748 [2024-11-25 10:30:44.716819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.716876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.748 [2024-11-25 10:30:44.716896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.748 [2024-11-25 10:30:44.716918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.748 [2024-11-25 10:30:44.716935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.716965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.748 [2024-11-25 10:30:44.716976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.748 [2024-11-25 10:30:44.716988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.748 [2024-11-25 10:30:44.717001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.748 [2024-11-25 10:30:44.842967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:37.748 [2024-11-25 10:30:44.843046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.748 [2024-11-25 10:30:44.843086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:37.748 [2024-11-25 10:30:44.843097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 
10:30:44.943990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.007 [2024-11-25 10:30:44.944217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.007 [2024-11-25 10:30:44.944258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.007 [2024-11-25 10:30:44.944278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 10:30:44.944405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.007 [2024-11-25 10:30:44.944419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.007 [2024-11-25 10:30:44.944440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.007 [2024-11-25 10:30:44.944451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 10:30:44.944485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.007 [2024-11-25 10:30:44.944519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.007 [2024-11-25 10:30:44.944535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.007 [2024-11-25 10:30:44.944545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 10:30:44.944682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.007 [2024-11-25 10:30:44.944695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.007 [2024-11-25 10:30:44.944710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.007 [2024-11-25 10:30:44.944721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 10:30:44.944765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.007 [2024-11-25 10:30:44.944777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.007 [2024-11-25 10:30:44.944792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.007 [2024-11-25 10:30:44.944802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.007 [2024-11-25 10:30:44.944851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.008 [2024-11-25 10:30:44.944863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.008 [2024-11-25 10:30:44.944883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.008 [2024-11-25 10:30:44.944893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.008 [2024-11-25 10:30:44.944942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.008 [2024-11-25 10:30:44.944953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.008 [2024-11-25 10:30:44.944968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.008 [2024-11-25 10:30:44.944978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.008 [2024-11-25 10:30:44.945127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.242 ms, result 0 00:25:38.945 10:30:45 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:38.946 10:30:45 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:39.205 [2024-11-25 10:30:46.064039] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:25:39.205 [2024-11-25 10:30:46.064229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78509 ] 00:25:39.205 [2024-11-25 10:30:46.248335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.464 [2024-11-25 10:30:46.358610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.722 [2024-11-25 10:30:46.714326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:39.722 [2024-11-25 10:30:46.714627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:39.984 [2024-11-25 10:30:46.876222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.876277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:39.984 [2024-11-25 10:30:46.876293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:39.984 [2024-11-25 10:30:46.876304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.879451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.879516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:39.984 [2024-11-25 10:30:46.879546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:25:39.984 [2024-11-25 10:30:46.879557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.879653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:39.984 [2024-11-25 10:30:46.880634] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:39.984 [2024-11-25 10:30:46.880671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.880683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:39.984 [2024-11-25 10:30:46.880694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:25:39.984 [2024-11-25 10:30:46.880704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.882360] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:39.984 [2024-11-25 10:30:46.901517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.901556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:39.984 [2024-11-25 10:30:46.901570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.188 ms 00:25:39.984 [2024-11-25 10:30:46.901581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.901680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.901694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:39.984 [2024-11-25 10:30:46.901706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:25:39.984 [2024-11-25 10:30:46.901716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.908475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.908518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:39.984 [2024-11-25 10:30:46.908530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.729 ms 00:25:39.984 [2024-11-25 10:30:46.908540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.908653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.908668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:39.984 [2024-11-25 10:30:46.908680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:39.984 [2024-11-25 10:30:46.908690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.908724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.908735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:39.984 [2024-11-25 10:30:46.908746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:39.984 [2024-11-25 10:30:46.908756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.908779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:39.984 [2024-11-25 10:30:46.913665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.913699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:39.984 [2024-11-25 10:30:46.913711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:25:39.984 [2024-11-25 10:30:46.913721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.913788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.984 [2024-11-25 10:30:46.913800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:39.984 [2024-11-25 10:30:46.913811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:39.984 [2024-11-25 10:30:46.913825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.984 [2024-11-25 10:30:46.913849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:39.984 [2024-11-25 10:30:46.913870] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:39.984 [2024-11-25 10:30:46.913905] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:39.984 [2024-11-25 10:30:46.913922] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:39.984 [2024-11-25 10:30:46.914010] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:39.984 [2024-11-25 10:30:46.914024] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:39.984 [2024-11-25 10:30:46.914039] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:39.984 [2024-11-25 10:30:46.914053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:39.984 [2024-11-25 10:30:46.914064] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:39.984 [2024-11-25 10:30:46.914076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:39.984 [2024-11-25 10:30:46.914086] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:39.984 [2024-11-25 10:30:46.914095] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:39.985 [2024-11-25 10:30:46.914105] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:39.985 [2024-11-25 10:30:46.914116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:46.914126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:39.985 [2024-11-25 10:30:46.914136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:25:39.985 [2024-11-25 10:30:46.914147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-25 10:30:46.914225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:46.914236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:39.985 [2024-11-25 10:30:46.914246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:39.985 [2024-11-25 10:30:46.914256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-25 10:30:46.914347] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:39.985 [2024-11-25 10:30:46.914359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:39.985 [2024-11-25 10:30:46.914370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:39.985 [2024-11-25 10:30:46.914400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:39.985 [2024-11-25 10:30:46.914429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.985 [2024-11-25 10:30:46.914448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:39.985 [2024-11-25 10:30:46.914469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:39.985 [2024-11-25 10:30:46.914479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:39.985 [2024-11-25 10:30:46.914488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:39.985 [2024-11-25 10:30:46.914522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:39.985 [2024-11-25 10:30:46.914532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914542] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:39.985 [2024-11-25 10:30:46.914551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:39.985 [2024-11-25 10:30:46.914595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:39.985 [2024-11-25 10:30:46.914624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:39.985 [2024-11-25 10:30:46.914651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:39.985 [2024-11-25 10:30:46.914679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:39.985 [2024-11-25 10:30:46.914706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.985 [2024-11-25 10:30:46.914724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:39.985 [2024-11-25 10:30:46.914732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:39.985 [2024-11-25 10:30:46.914741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:39.985 [2024-11-25 10:30:46.914750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:39.985 [2024-11-25 10:30:46.914759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:39.985 [2024-11-25 10:30:46.914768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:39.985 [2024-11-25 10:30:46.914786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:39.985 [2024-11-25 10:30:46.914794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914803] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:39.985 [2024-11-25 10:30:46.914819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:39.985 [2024-11-25 10:30:46.914830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:39.985 [2024-11-25 10:30:46.914850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:39.985 
[2024-11-25 10:30:46.914860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:39.985 [2024-11-25 10:30:46.914872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:39.985 [2024-11-25 10:30:46.914887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:39.985 [2024-11-25 10:30:46.914902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:39.985 [2024-11-25 10:30:46.914914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:39.985 [2024-11-25 10:30:46.914931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:39.985 [2024-11-25 10:30:46.914949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.914966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:39.985 [2024-11-25 10:30:46.914983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:39.985 [2024-11-25 10:30:46.915001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:39.985 [2024-11-25 10:30:46.915014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:39.985 [2024-11-25 10:30:46.915025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:39.985 [2024-11-25 10:30:46.915035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:39.985 [2024-11-25 10:30:46.915045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:39.985 [2024-11-25 10:30:46.915055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:39.985 [2024-11-25 10:30:46.915066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:39.985 [2024-11-25 10:30:46.915077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:39.985 [2024-11-25 10:30:46.915127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:39.985 [2024-11-25 10:30:46.915138] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:39.985 [2024-11-25 10:30:46.915164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:39.985 [2024-11-25 10:30:46.915175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:39.985 [2024-11-25 10:30:46.915185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:39.985 [2024-11-25 10:30:46.915197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:46.915207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:39.985 [2024-11-25 10:30:46.915219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:25:39.985 [2024-11-25 10:30:46.915229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-25 10:30:46.954427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:46.954469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:39.985 [2024-11-25 10:30:46.954484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.185 ms 00:25:39.985 [2024-11-25 10:30:46.954514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-25 10:30:46.954645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:46.954658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:39.985 [2024-11-25 10:30:46.954670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:39.985 [2024-11-25 10:30:46.954680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.985 [2024-11-25 10:30:47.009225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.985 [2024-11-25 10:30:47.009437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.985 [2024-11-25 10:30:47.009462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.609 ms 00:25:39.985 [2024-11-25 10:30:47.009473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 10:30:47.009597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.009612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.986 [2024-11-25 10:30:47.009623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:39.986 [2024-11-25 10:30:47.009634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 10:30:47.010069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.010083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:39.986 [2024-11-25 10:30:47.010100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:25:39.986 [2024-11-25 10:30:47.010110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 
10:30:47.010227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.010241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.986 [2024-11-25 10:30:47.010252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:39.986 [2024-11-25 10:30:47.010262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 10:30:47.028480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.028522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.986 [2024-11-25 10:30:47.028537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.225 ms 00:25:39.986 [2024-11-25 10:30:47.028548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 10:30:47.047921] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:39.986 [2024-11-25 10:30:47.048102] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:39.986 [2024-11-25 10:30:47.048123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.048135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:39.986 [2024-11-25 10:30:47.048146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.494 ms 00:25:39.986 [2024-11-25 10:30:47.048156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.986 [2024-11-25 10:30:47.077904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.986 [2024-11-25 10:30:47.077947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:39.986 [2024-11-25 10:30:47.077961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.712 ms 00:25:39.986 [2024-11-25 10:30:47.077972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.095983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.096024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:40.250 [2024-11-25 10:30:47.096037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.950 ms 00:25:40.250 [2024-11-25 10:30:47.096047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.114200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.114363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:40.250 [2024-11-25 10:30:47.114384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.105 ms 00:25:40.250 [2024-11-25 10:30:47.114395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.115162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.115198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:40.250 [2024-11-25 10:30:47.115218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:25:40.250 [2024-11-25 10:30:47.115229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.201232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.201448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:40.250 [2024-11-25 10:30:47.201476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.112 ms 00:25:40.250 [2024-11-25 10:30:47.201487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.212645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:40.250 [2024-11-25 10:30:47.229036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.229090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:40.250 [2024-11-25 10:30:47.229113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.459 ms 00:25:40.250 [2024-11-25 10:30:47.229124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.229261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.229275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:40.250 [2024-11-25 10:30:47.229295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:40.250 [2024-11-25 10:30:47.229305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.229362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.229374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:40.250 [2024-11-25 10:30:47.229389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:40.250 [2024-11-25 10:30:47.229403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.229427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.229438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:40.250 [2024-11-25 10:30:47.229448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:40.250 [2024-11-25 10:30:47.229458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.229520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:40.250 [2024-11-25 10:30:47.229535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.229545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:40.250 [2024-11-25 10:30:47.229556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:40.250 [2024-11-25 10:30:47.229566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.266445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.266486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:40.250 [2024-11-25 10:30:47.266529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.908 ms 00:25:40.250 [2024-11-25 10:30:47.266540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.266670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.250 [2024-11-25 10:30:47.266684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:40.250 [2024-11-25 10:30:47.266695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:40.250 [2024-11-25 10:30:47.266710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.250 [2024-11-25 10:30:47.267669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:40.250 [2024-11-25 10:30:47.271770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.761 ms, result 0 00:25:40.250 [2024-11-25 10:30:47.272577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:40.250 [2024-11-25 10:30:47.290958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:41.187  [2024-11-25T10:30:49.675Z] Copying: 29/256 [MB] (29 MBps) [2024-11-25T10:30:50.611Z] Copying: 55/256 [MB] (26 MBps) [2024-11-25T10:30:51.549Z] Copying: 82/256 [MB] (26 MBps) [2024-11-25T10:30:52.485Z] Copying: 109/256 [MB] (26 MBps) [2024-11-25T10:30:53.431Z] Copying: 135/256 [MB] (26 MBps) [2024-11-25T10:30:54.369Z] Copying: 161/256 [MB] (26 MBps) [2024-11-25T10:30:55.306Z] Copying: 188/256 [MB] (26 MBps) [2024-11-25T10:30:56.683Z] Copying: 215/256 [MB] (27 MBps) [2024-11-25T10:30:56.942Z] Copying: 242/256 [MB] (27 MBps) [2024-11-25T10:30:56.942Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-25 10:30:56.771759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:49.831 [2024-11-25 10:30:56.786379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.786578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:49.831 [2024-11-25 10:30:56.786618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:49.831 [2024-11-25 10:30:56.786629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.786663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:49.831 [2024-11-25 10:30:56.790817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.790850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:49.831 [2024-11-25 10:30:56.790863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.144 ms 00:25:49.831 [2024-11-25 10:30:56.790873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.791100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.791112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:49.831 [2024-11-25 10:30:56.791123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:25:49.831 [2024-11-25 10:30:56.791133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.794018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.794165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:49.831 [2024-11-25 10:30:56.794186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:25:49.831 [2024-11-25 10:30:56.794196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
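[Editor's note] For context on the copy that just completed: trim.sh invoked spdk_dd with --count=65536, and assuming ftl0 exposes 4 KiB logical blocks (an assumption; the log does not state the block size), that is exactly the 256 MB total the "Copying: 256/256 [MB]" progress lines report. At the logged average of 27 MBps the transfer comes to roughly 9.5 s, consistent with the 10:30:47 to 10:30:56 wall-clock span above. A quick back-of-the-envelope check:

    #!/usr/bin/env bash
    # Sanity-check the spdk_dd copy size and duration reported above.
    # Assumes 4 KiB logical blocks on ftl0 (hypothetical; not stated in the log).
    blocks=65536        # --count passed to spdk_dd
    block_size=4096     # assumed logical block size in bytes
    total_mb=$(( blocks * block_size / 1024 / 1024 ))
    echo "copied: ${total_mb} MB"                      # -> 256 MB
    awk -v mb="$total_mb" 'BEGIN { printf "at 27 MBps: ~%.1f s\n", mb / 27 }'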
00:25:49.831 [2024-11-25 10:30:56.799862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.799897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:49.831 [2024-11-25 10:30:56.799910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.647 ms 00:25:49.831 [2024-11-25 10:30:56.799920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.835879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.836042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:49.831 [2024-11-25 10:30:56.836064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.943 ms 00:25:49.831 [2024-11-25 10:30:56.836074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.857557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.857605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:49.831 [2024-11-25 10:30:56.857619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.460 ms 00:25:49.831 [2024-11-25 10:30:56.857629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.857761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.857774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:49.831 [2024-11-25 10:30:56.857802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:49.831 [2024-11-25 10:30:56.857813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.895148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.895187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:49.831 [2024-11-25 10:30:56.895200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.376 ms 00:25:49.831 [2024-11-25 10:30:56.895209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:49.831 [2024-11-25 10:30:56.930809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:49.831 [2024-11-25 10:30:56.930847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:49.831 [2024-11-25 10:30:56.930861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.600 ms 00:25:49.831 [2024-11-25 10:30:56.930871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.091 [2024-11-25 10:30:56.965920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.091 [2024-11-25 10:30:56.966087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:50.091 [2024-11-25 10:30:56.966109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.032 ms 00:25:50.091 [2024-11-25 10:30:56.966120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.091 [2024-11-25 10:30:57.002268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.091 [2024-11-25 10:30:57.002308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:50.091 [2024-11-25 10:30:57.002321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.117 ms 00:25:50.091 [2024-11-25 
10:30:57.002331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:50.091 [2024-11-25 10:30:57.002386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:50.091 [2024-11-25 10:30:57.002403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-73 omitted -- every band reports the identical line: 0 / 261120 wr_cnt: 0 state: free; the dump continues below]
00:25:50.092 [2024-11-25 10:30:57.003285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:50.092 [2024-11-25 10:30:57.003651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:50.092 [2024-11-25 10:30:57.003662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460 00:25:50.092 [2024-11-25 10:30:57.003672] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:50.092 [2024-11-25 10:30:57.003682] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:50.092 [2024-11-25 10:30:57.003692] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:50.092 [2024-11-25 10:30:57.003702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:50.092 [2024-11-25 10:30:57.003711] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:50.092 [2024-11-25 10:30:57.003722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:50.092 [2024-11-25 10:30:57.003741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:50.092 [2024-11-25 10:30:57.003757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:50.092 [2024-11-25 10:30:57.003772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:50.092 [2024-11-25 10:30:57.003785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.092 [2024-11-25 10:30:57.003795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:50.092 [2024-11-25 10:30:57.003806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:25:50.092 [2024-11-25 10:30:57.003817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.092 [2024-11-25 10:30:57.023672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.092 [2024-11-25 10:30:57.023709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:50.092 [2024-11-25 10:30:57.023722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.862 ms 00:25:50.092 [2024-11-25 10:30:57.023743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.092 [2024-11-25 10:30:57.024303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.092 [2024-11-25 10:30:57.024319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:50.092 [2024-11-25 10:30:57.024334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:25:50.092 [2024-11-25 10:30:57.024351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.092 [2024-11-25 10:30:57.079903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.092 [2024-11-25 10:30:57.079944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.092 [2024-11-25 10:30:57.079958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.093 [2024-11-25 10:30:57.079978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.093 [2024-11-25 10:30:57.080081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.093 [2024-11-25 10:30:57.080093] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.093 [2024-11-25 10:30:57.080104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.093 [2024-11-25 10:30:57.080114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.093 [2024-11-25 10:30:57.080168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.093 [2024-11-25 10:30:57.080181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.093 [2024-11-25 10:30:57.080192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.093 [2024-11-25 10:30:57.080202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.093 [2024-11-25 10:30:57.080229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.093 [2024-11-25 10:30:57.080239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.093 [2024-11-25 10:30:57.080249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.093 [2024-11-25 10:30:57.080259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.204649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.204709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.352 [2024-11-25 10:30:57.204725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.204742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.352 [2024-11-25 10:30:57.305093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.352 [2024-11-25 10:30:57.305221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.352 [2024-11-25 10:30:57.305296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.352 [2024-11-25 10:30:57.305438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:50.352 [2024-11-25 10:30:57.305543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.352 [2024-11-25 10:30:57.305612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:50.352 [2024-11-25 10:30:57.305680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.352 [2024-11-25 10:30:57.305691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:50.352 [2024-11-25 10:30:57.305700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.352 [2024-11-25 10:30:57.305836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.292 ms, result 0 00:25:51.291 00:25:51.291 00:25:51.291 10:30:58 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:25:51.291 10:30:58 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:51.859 10:30:58 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:51.859 [2024-11-25 10:30:58.885817] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
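Two things worth decoding in the dumps above before the next test stage. In the statistics dump, WAF: inf is just the write-amplification ratio, total media writes / user writes; this run logged total writes: 960 (all metadata traffic) against user writes: 0, so the ratio is reported as infinite. Consistently, the bands dump shows all 100 bands free with 0 / 261120 valid blocks. The trim.sh steps echoed above then check the trim result and start the next pass: cmp --bytes=4194304 compares the first 4 MiB of the read-back data file against /dev/zero (the test expects a trimmed range to read back as zeros), md5sum fingerprints the file, and spdk_dd rewrites a random pattern into ftl0 with the exact command shown. A minimal sketch of that verification step, with illustrative paths standing in for the repo paths used by the test:

    # Sketch only: ./data stands in for test/ftl/data from the log above.
    # cmp is silent on success, so echo a confirmation explicitly.
    cmp --bytes=4194304 ./data /dev/zero \
      && echo "first 4 MiB of the trimmed range reads back as zeros"
    md5sum ./data   # fingerprint the read-back buffer for later comparison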
00:25:51.859 [2024-11-25 10:30:58.885934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78642 ] 00:25:52.119 [2024-11-25 10:30:59.066010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.119 [2024-11-25 10:30:59.185142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.689 [2024-11-25 10:30:59.540432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.689 [2024-11-25 10:30:59.540518] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.689 [2024-11-25 10:30:59.703397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.703676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:52.689 [2024-11-25 10:30:59.703705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.689 [2024-11-25 10:30:59.703718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.707042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.707218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.689 [2024-11-25 10:30:59.707242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.295 ms 00:25:52.689 [2024-11-25 10:30:59.707253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.707446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:52.689 [2024-11-25 10:30:59.708518] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:52.689 [2024-11-25 10:30:59.708555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.708567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.689 [2024-11-25 10:30:59.708578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:25:52.689 [2024-11-25 10:30:59.708589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.710110] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:52.689 [2024-11-25 10:30:59.729993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.730033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:52.689 [2024-11-25 10:30:59.730048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.916 ms 00:25:52.689 [2024-11-25 10:30:59.730059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.730158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.730173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:52.689 [2024-11-25 10:30:59.730184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:52.689 [2024-11-25 10:30:59.730194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.736878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:52.689 [2024-11-25 10:30:59.736907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.689 [2024-11-25 10:30:59.736919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.652 ms 00:25:52.689 [2024-11-25 10:30:59.736929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.737031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.737046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.689 [2024-11-25 10:30:59.737057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:52.689 [2024-11-25 10:30:59.737067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.737098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.737109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:52.689 [2024-11-25 10:30:59.737120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:52.689 [2024-11-25 10:30:59.737129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.737152] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:52.689 [2024-11-25 10:30:59.742070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.742105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.689 [2024-11-25 10:30:59.742118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:25:52.689 [2024-11-25 10:30:59.742128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.742195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.689 [2024-11-25 10:30:59.742208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:52.689 [2024-11-25 10:30:59.742218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:52.689 [2024-11-25 10:30:59.742228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.689 [2024-11-25 10:30:59.742255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:52.689 [2024-11-25 10:30:59.742276] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:52.689 [2024-11-25 10:30:59.742310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:52.689 [2024-11-25 10:30:59.742329] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:52.689 [2024-11-25 10:30:59.742418] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:52.690 [2024-11-25 10:30:59.742431] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:52.690 [2024-11-25 10:30:59.742445] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:52.690 [2024-11-25 10:30:59.742462] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:52.690 [2024-11-25 10:30:59.742474] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:52.690 [2024-11-25 10:30:59.742485] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:52.690 [2024-11-25 10:30:59.742515] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:52.690 [2024-11-25 10:30:59.742525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:52.690 [2024-11-25 10:30:59.742535] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:52.690 [2024-11-25 10:30:59.742546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.690 [2024-11-25 10:30:59.742556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:52.690 [2024-11-25 10:30:59.742567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:25:52.690 [2024-11-25 10:30:59.742577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-11-25 10:30:59.742653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.690 [2024-11-25 10:30:59.742667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:52.690 [2024-11-25 10:30:59.742678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:52.690 [2024-11-25 10:30:59.742688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.690 [2024-11-25 10:30:59.742777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:52.690 [2024-11-25 10:30:59.742790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:52.690 [2024-11-25 10:30:59.742800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.690 [2024-11-25 10:30:59.742811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.742821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:52.690 [2024-11-25 10:30:59.742830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.742839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:52.690 [2024-11-25 10:30:59.742848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:52.690 [2024-11-25 10:30:59.742858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:52.690 [2024-11-25 10:30:59.742867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.690 [2024-11-25 10:30:59.742876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:52.690 [2024-11-25 10:30:59.742899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:52.690 [2024-11-25 10:30:59.742908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.690 [2024-11-25 10:30:59.742918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:52.690 [2024-11-25 10:30:59.742927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:52.690 [2024-11-25 10:30:59.742937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.742946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:52.690 [2024-11-25 10:30:59.742955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:52.690 [2024-11-25 10:30:59.742964] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.742979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:52.690 [2024-11-25 10:30:59.742995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:52.690 [2024-11-25 10:30:59.743036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:52.690 [2024-11-25 10:30:59.743077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:52.690 [2024-11-25 10:30:59.743105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:52.690 [2024-11-25 10:30:59.743132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.690 [2024-11-25 10:30:59.743150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:52.690 [2024-11-25 10:30:59.743159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:52.690 [2024-11-25 10:30:59.743168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.690 [2024-11-25 10:30:59.743177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:52.690 [2024-11-25 10:30:59.743186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:52.690 [2024-11-25 10:30:59.743194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:52.690 [2024-11-25 10:30:59.743212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:52.690 [2024-11-25 10:30:59.743221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743230] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:52.690 [2024-11-25 10:30:59.743240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:52.690 [2024-11-25 10:30:59.743254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.690 [2024-11-25 10:30:59.743283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:52.690 [2024-11-25 10:30:59.743297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:52.690 [2024-11-25 10:30:59.743310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:52.690 
[2024-11-25 10:30:59.743325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:52.690 [2024-11-25 10:30:59.743342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:52.690 [2024-11-25 10:30:59.743354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:52.690 [2024-11-25 10:30:59.743366] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:52.690 [2024-11-25 10:30:59.743379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.690 [2024-11-25 10:30:59.743390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:52.690 [2024-11-25 10:30:59.743401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:52.690 [2024-11-25 10:30:59.743411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:52.690 [2024-11-25 10:30:59.743421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:52.690 [2024-11-25 10:30:59.743431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:52.690 [2024-11-25 10:30:59.743442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:52.690 [2024-11-25 10:30:59.743453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:52.690 [2024-11-25 10:30:59.743462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:52.691 [2024-11-25 10:30:59.743472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:52.691 [2024-11-25 10:30:59.743482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:52.691 [2024-11-25 10:30:59.743574] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:52.691 [2024-11-25 10:30:59.743585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.691 [2024-11-25 10:30:59.743607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:52.691 [2024-11-25 10:30:59.743618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:52.691 [2024-11-25 10:30:59.743628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:52.691 [2024-11-25 10:30:59.743642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.691 [2024-11-25 10:30:59.743658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:52.691 [2024-11-25 10:30:59.743670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:25:52.691 [2024-11-25 10:30:59.743680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.691 [2024-11-25 10:30:59.783319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.691 [2024-11-25 10:30:59.783531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.691 [2024-11-25 10:30:59.783555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.639 ms 00:25:52.691 [2024-11-25 10:30:59.783566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.691 [2024-11-25 10:30:59.783705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.691 [2024-11-25 10:30:59.783717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:52.691 [2024-11-25 10:30:59.783728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:52.691 [2024-11-25 10:30:59.783738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.839805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.839844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.950 [2024-11-25 10:30:59.839861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.135 ms 00:25:52.950 [2024-11-25 10:30:59.839881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.839983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.839995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.950 [2024-11-25 10:30:59.840006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:52.950 [2024-11-25 10:30:59.840017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.840453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.840466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.950 [2024-11-25 10:30:59.840477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:52.950 [2024-11-25 10:30:59.840491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.840647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.840661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.950 [2024-11-25 10:30:59.840673] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:25:52.950 [2024-11-25 10:30:59.840682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.860154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.860190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.950 [2024-11-25 10:30:59.860203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.482 ms 00:25:52.950 [2024-11-25 10:30:59.860214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.879345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:52.950 [2024-11-25 10:30:59.879384] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:52.950 [2024-11-25 10:30:59.879400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.879411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:52.950 [2024-11-25 10:30:59.879422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.114 ms 00:25:52.950 [2024-11-25 10:30:59.879433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.909196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.909237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:52.950 [2024-11-25 10:30:59.909250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.707 ms 00:25:52.950 [2024-11-25 10:30:59.909261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.950 [2024-11-25 10:30:59.927654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.950 [2024-11-25 10:30:59.927690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:52.950 [2024-11-25 10:30:59.927703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.336 ms 00:25:52.951 [2024-11-25 10:30:59.927729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.951 [2024-11-25 10:30:59.945971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.951 [2024-11-25 10:30:59.946008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:52.951 [2024-11-25 10:30:59.946020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.194 ms 00:25:52.951 [2024-11-25 10:30:59.946031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.951 [2024-11-25 10:30:59.946838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.951 [2024-11-25 10:30:59.946863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:52.951 [2024-11-25 10:30:59.946876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:25:52.951 [2024-11-25 10:30:59.946886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.951 [2024-11-25 10:31:00.031988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.951 [2024-11-25 10:31:00.032042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:52.951 [2024-11-25 10:31:00.032058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.211 ms 00:25:52.951 [2024-11-25 10:31:00.032069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.951 [2024-11-25 10:31:00.043852] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:53.210 [2024-11-25 10:31:00.060347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.060398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:53.210 [2024-11-25 10:31:00.060414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.197 ms 00:25:53.210 [2024-11-25 10:31:00.060431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.060585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.060600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:53.210 [2024-11-25 10:31:00.060611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:53.210 [2024-11-25 10:31:00.060622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.060680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.060692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:53.210 [2024-11-25 10:31:00.060703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:53.210 [2024-11-25 10:31:00.060718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.060745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.060755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:53.210 [2024-11-25 10:31:00.060765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:53.210 [2024-11-25 10:31:00.060775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.060811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:53.210 [2024-11-25 10:31:00.060823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.060833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:53.210 [2024-11-25 10:31:00.060843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:53.210 [2024-11-25 10:31:00.060853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.097752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.097927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:53.210 [2024-11-25 10:31:00.097951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.937 ms 00:25:53.210 [2024-11-25 10:31:00.097962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.210 [2024-11-25 10:31:00.098084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.210 [2024-11-25 10:31:00.098099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:53.210 [2024-11-25 10:31:00.098110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:53.210 [2024-11-25 10:31:00.098120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
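For reading the layout dump earlier in this startup (the Region sb / l2p / p2l* listing and the SB metadata lines after it): each region appears twice, once in MiB and once as hex blk_offs/blk_sz block counts, and the two views agree under FTL's 4 KiB block size. For example, the l2p region's blk_sz:0x5a00 is 23040 blocks, i.e. the 90.00 MiB shown for Region l2p, and each p2l checkpoint region's blk_sz:0x800 is 2048 blocks, i.e. 8.00 MiB. A throwaway shell helper (blk_to_mib is hypothetical, not part of the test suite) that reproduces the conversion:

    # Assumes the default 4 KiB FTL block size seen in this log.
    blk_to_mib() { echo "scale=2; $(( $1 )) * 4096 / 1024 / 1024" | bc; }
    blk_to_mib 0x5a00     # l2p region           -> 90.00 MiB
    blk_to_mib 0x800      # one p2l checkpoint   -> 8.00 MiB
    blk_to_mib 0x1900000  # data_btm (type 0x9)  -> 102400.00 MiB

The same arithmetic ties the base-device numbers together: the type 0x9 data region is 0x1900000 = 26214400 blocks, exactly the 102400.00 MiB data_btm region carved out of the 103424.00 MiB base device.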
00:25:53.210 [2024-11-25 10:31:00.099013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:53.211 [2024-11-25 10:31:00.103159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.964 ms, result 0 00:25:53.211 [2024-11-25 10:31:00.104014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.211 [2024-11-25 10:31:00.122661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:53.211  [2024-11-25T10:31:00.323Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-11-25 10:31:00.283425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.211 [2024-11-25 10:31:00.297606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.211 [2024-11-25 10:31:00.297645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.211 [2024-11-25 10:31:00.297659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:53.211 [2024-11-25 10:31:00.297675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.211 [2024-11-25 10:31:00.297697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:53.211 [2024-11-25 10:31:00.301809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.211 [2024-11-25 10:31:00.301843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.211 [2024-11-25 10:31:00.301855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:25:53.211 [2024-11-25 10:31:00.301864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.211 [2024-11-25 10:31:00.303622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.211 [2024-11-25 10:31:00.303657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.211 [2024-11-25 10:31:00.303669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.733 ms 00:25:53.211 [2024-11-25 10:31:00.303679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.211 [2024-11-25 10:31:00.306957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.211 [2024-11-25 10:31:00.306999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.211 [2024-11-25 10:31:00.307011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:25:53.211 [2024-11-25 10:31:00.307021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.211 [2024-11-25 10:31:00.312666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.211 [2024-11-25 10:31:00.312823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.211 [2024-11-25 10:31:00.312846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.622 ms 00:25:53.211 [2024-11-25 10:31:00.312856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.349704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.349742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.472 [2024-11-25 10:31:00.349755] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.842 ms 00:25:53.472 [2024-11-25 10:31:00.349765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.370823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.370861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.472 [2024-11-25 10:31:00.370880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.037 ms 00:25:53.472 [2024-11-25 10:31:00.370890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.371033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.371047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.472 [2024-11-25 10:31:00.371070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:53.472 [2024-11-25 10:31:00.371080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.407423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.407603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.472 [2024-11-25 10:31:00.407625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.384 ms 00:25:53.472 [2024-11-25 10:31:00.407635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.444248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.444286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.472 [2024-11-25 10:31:00.444299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.617 ms 00:25:53.472 [2024-11-25 10:31:00.444309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.480489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.480531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.472 [2024-11-25 10:31:00.480543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.184 ms 00:25:53.472 [2024-11-25 10:31:00.480552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.516067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.472 [2024-11-25 10:31:00.516103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.472 [2024-11-25 10:31:00.516262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.461 ms 00:25:53.472 [2024-11-25 10:31:00.516279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.472 [2024-11-25 10:31:00.516371] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.472 [2024-11-25 10:31:00.516388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:53.472 [2024-11-25 10:31:00.516401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.472 [2024-11-25 10:31:00.516411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.472 [2024-11-25 10:31:00.516422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:53.473 [2024-11-25 10:31:00.516432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.516990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.473 [2024-11-25 10:31:00.517164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517215] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.474 [2024-11-25 10:31:00.517470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.474 [2024-11-25 10:31:00.517480] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460 00:25:53.474 [2024-11-25 10:31:00.517499] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:53.474 [2024-11-25 10:31:00.517509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:25:53.474 [2024-11-25 10:31:00.517519] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:53.474 [2024-11-25 10:31:00.517529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:53.474 [2024-11-25 10:31:00.517539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.474 [2024-11-25 10:31:00.517548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.474 [2024-11-25 10:31:00.517563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.474 [2024-11-25 10:31:00.517572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.474 [2024-11-25 10:31:00.517581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.474 [2024-11-25 10:31:00.517590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.474 [2024-11-25 10:31:00.517600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.474 [2024-11-25 10:31:00.517610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:25:53.474 [2024-11-25 10:31:00.517623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.474 [2024-11-25 10:31:00.537371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.474 [2024-11-25 10:31:00.537407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.474 [2024-11-25 10:31:00.537420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.759 ms 00:25:53.474 [2024-11-25 10:31:00.537430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.474 [2024-11-25 10:31:00.537988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.474 [2024-11-25 10:31:00.538015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.474 [2024-11-25 10:31:00.538026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:53.474 [2024-11-25 10:31:00.538036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.592301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.592335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.734 [2024-11-25 10:31:00.592348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.592363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.592449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.592462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.734 [2024-11-25 10:31:00.592472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.592482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.592543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.592557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.734 [2024-11-25 10:31:00.592567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.592578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.592600] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.592611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.734 [2024-11-25 10:31:00.592621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.592631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.718783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.718840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.734 [2024-11-25 10:31:00.718855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.718865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.820071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.734 [2024-11-25 10:31:00.820125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.734 [2024-11-25 10:31:00.820139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.734 [2024-11-25 10:31:00.820150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.734 [2024-11-25 10:31:00.820241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.735 [2024-11-25 10:31:00.820264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.735 [2024-11-25 10:31:00.820331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.735 [2024-11-25 10:31:00.820482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.735 [2024-11-25 10:31:00.820578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.735 [2024-11-25 10:31:00.820647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820657] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.735 [2024-11-25 10:31:00.820714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.735 [2024-11-25 10:31:00.820725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.735 [2024-11-25 10:31:00.820735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.735 [2024-11-25 10:31:00.820868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.100 ms, result 0 00:25:55.115 00:25:55.115 00:25:55.115 10:31:01 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78675 00:25:55.115 10:31:01 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:55.115 10:31:01 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78675 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78675 ']' 00:25:55.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.115 10:31:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:55.115 [2024-11-25 10:31:01.975548] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:25:55.115 [2024-11-25 10:31:01.975664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78675 ] 00:25:55.115 [2024-11-25 10:31:02.154153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.375 [2024-11-25 10:31:02.265504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.313 10:31:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.313 10:31:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:56.313 10:31:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:56.313 [2024-11-25 10:31:03.354833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.313 [2024-11-25 10:31:03.354894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.573 [2024-11-25 10:31:03.527224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.527276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.573 [2024-11-25 10:31:03.527294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.573 [2024-11-25 10:31:03.527304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.530428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.530472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.573 [2024-11-25 10:31:03.530488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.106 ms 00:25:56.573 [2024-11-25 10:31:03.530508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.530622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.573 [2024-11-25 10:31:03.531635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.573 [2024-11-25 10:31:03.531829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.531847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.573 [2024-11-25 10:31:03.531861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:25:56.573 [2024-11-25 10:31:03.531872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.533360] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:56.573 [2024-11-25 10:31:03.552663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.552709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:56.573 [2024-11-25 10:31:03.552725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.338 ms 00:25:56.573 [2024-11-25 10:31:03.552738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.552836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.552853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:56.573 [2024-11-25 10:31:03.552865] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:56.573 [2024-11-25 10:31:03.552877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.559688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.559726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.573 [2024-11-25 10:31:03.559738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.773 ms 00:25:56.573 [2024-11-25 10:31:03.559751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.559862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.559879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.573 [2024-11-25 10:31:03.559893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:56.573 [2024-11-25 10:31:03.559907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.559940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.559954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.573 [2024-11-25 10:31:03.559965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:56.573 [2024-11-25 10:31:03.559977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.560003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:56.573 [2024-11-25 10:31:03.564830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.564864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.573 [2024-11-25 10:31:03.564879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.838 ms 00:25:56.573 [2024-11-25 10:31:03.564890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.564964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.564976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.573 [2024-11-25 10:31:03.564992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:56.573 [2024-11-25 10:31:03.565003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.565027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.573 [2024-11-25 10:31:03.565048] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.573 [2024-11-25 10:31:03.565093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.573 [2024-11-25 10:31:03.565113] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.573 [2024-11-25 10:31:03.565203] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.573 [2024-11-25 10:31:03.565221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.573 [2024-11-25 10:31:03.565237] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.573 [2024-11-25 10:31:03.565250] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565265] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565276] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:56.573 [2024-11-25 10:31:03.565298] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.573 [2024-11-25 10:31:03.565308] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.573 [2024-11-25 10:31:03.565323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.573 [2024-11-25 10:31:03.565334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.565347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.573 [2024-11-25 10:31:03.565358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:25:56.573 [2024-11-25 10:31:03.565373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.565448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.573 [2024-11-25 10:31:03.565462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.573 [2024-11-25 10:31:03.565472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:56.573 [2024-11-25 10:31:03.565484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.573 [2024-11-25 10:31:03.565595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.573 [2024-11-25 10:31:03.565611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.573 [2024-11-25 10:31:03.565622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.573 [2024-11-25 10:31:03.565647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.573 [2024-11-25 10:31:03.565659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.573 [2024-11-25 10:31:03.565669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.573 [2024-11-25 10:31:03.565695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:56.573 [2024-11-25 10:31:03.565706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.573 [2024-11-25 10:31:03.565716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.573 [2024-11-25 10:31:03.565727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:56.573 [2024-11-25 10:31:03.565737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.573 [2024-11-25 10:31:03.565751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.573 [2024-11-25 10:31:03.565761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:56.573 [2024-11-25 10:31:03.565773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.573 
[2024-11-25 10:31:03.565782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.573 [2024-11-25 10:31:03.565794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.573 [2024-11-25 10:31:03.565838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.573 [2024-11-25 10:31:03.565849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:56.573 [2024-11-25 10:31:03.565867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.573 [2024-11-25 10:31:03.565883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.573 [2024-11-25 10:31:03.565905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:56.574 [2024-11-25 10:31:03.565915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.574 [2024-11-25 10:31:03.565927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.574 [2024-11-25 10:31:03.565936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:56.574 [2024-11-25 10:31:03.565948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.574 [2024-11-25 10:31:03.565957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.574 [2024-11-25 10:31:03.565988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:56.574 [2024-11-25 10:31:03.565997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.574 [2024-11-25 10:31:03.566011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.574 [2024-11-25 10:31:03.566020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:56.574 [2024-11-25 10:31:03.566032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.574 [2024-11-25 10:31:03.566041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.574 [2024-11-25 10:31:03.566053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:56.574 [2024-11-25 10:31:03.566062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.574 [2024-11-25 10:31:03.566074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.574 [2024-11-25 10:31:03.566083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:56.574 [2024-11-25 10:31:03.566097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.574 [2024-11-25 10:31:03.566111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.574 [2024-11-25 10:31:03.566131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:56.574 [2024-11-25 10:31:03.566145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.574 [2024-11-25 10:31:03.566164] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.574 [2024-11-25 10:31:03.566177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.574 [2024-11-25 10:31:03.566190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.574 [2024-11-25 10:31:03.566201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.574 [2024-11-25 10:31:03.566214] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:56.574 [2024-11-25 10:31:03.566223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.574 [2024-11-25 10:31:03.566235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.574 [2024-11-25 10:31:03.566245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.574 [2024-11-25 10:31:03.566257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.574 [2024-11-25 10:31:03.566273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.574 [2024-11-25 10:31:03.566290] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.574 [2024-11-25 10:31:03.566303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:56.574 [2024-11-25 10:31:03.566331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:56.574 [2024-11-25 10:31:03.566345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:56.574 [2024-11-25 10:31:03.566356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:56.574 [2024-11-25 10:31:03.566369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:56.574 [2024-11-25 10:31:03.566380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:56.574 [2024-11-25 10:31:03.566397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:56.574 [2024-11-25 10:31:03.566415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:56.574 [2024-11-25 10:31:03.566431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:56.574 [2024-11-25 10:31:03.566443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:56.574 [2024-11-25 10:31:03.566517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.574 [2024-11-25 
10:31:03.566528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.574 [2024-11-25 10:31:03.566556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.574 [2024-11-25 10:31:03.566569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.574 [2024-11-25 10:31:03.566581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.574 [2024-11-25 10:31:03.566602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.566621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.574 [2024-11-25 10:31:03.566639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:25:56.574 [2024-11-25 10:31:03.566650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.606149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.606190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.574 [2024-11-25 10:31:03.606211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.495 ms 00:25:56.574 [2024-11-25 10:31:03.606222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.606350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.606364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.574 [2024-11-25 10:31:03.606377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:56.574 [2024-11-25 10:31:03.606388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.652744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.652782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.574 [2024-11-25 10:31:03.652799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.403 ms 00:25:56.574 [2024-11-25 10:31:03.652809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.652909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.652922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.574 [2024-11-25 10:31:03.652936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:56.574 [2024-11-25 10:31:03.652946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.653395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.653409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.574 [2024-11-25 10:31:03.653424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:25:56.574 [2024-11-25 10:31:03.653435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.653572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.653587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.574 [2024-11-25 10:31:03.653603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:56.574 [2024-11-25 10:31:03.653614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.574 [2024-11-25 10:31:03.673726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.574 [2024-11-25 10:31:03.673764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.574 [2024-11-25 10:31:03.673784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.113 ms 00:25:56.574 [2024-11-25 10:31:03.673795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.693319] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:56.834 [2024-11-25 10:31:03.693360] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:56.834 [2024-11-25 10:31:03.693381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.693392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:56.834 [2024-11-25 10:31:03.693406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.491 ms 00:25:56.834 [2024-11-25 10:31:03.693426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.722786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.722956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.834 [2024-11-25 10:31:03.722985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.305 ms 00:25:56.834 [2024-11-25 10:31:03.722999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.741679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.741835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.834 [2024-11-25 10:31:03.741864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.602 ms 00:25:56.834 [2024-11-25 10:31:03.741875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.759484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.759528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.834 [2024-11-25 10:31:03.759544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.558 ms 00:25:56.834 [2024-11-25 10:31:03.759554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.760331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.760361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.834 [2024-11-25 10:31:03.760376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:25:56.834 [2024-11-25 10:31:03.760386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 
10:31:03.856658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.856725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.834 [2024-11-25 10:31:03.856747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.389 ms 00:25:56.834 [2024-11-25 10:31:03.856758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.834 [2024-11-25 10:31:03.867812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:56.834 [2024-11-25 10:31:03.884019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.834 [2024-11-25 10:31:03.884096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.834 [2024-11-25 10:31:03.884112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.205 ms 00:25:56.835 [2024-11-25 10:31:03.884128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.884255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.884275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.835 [2024-11-25 10:31:03.884287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:56.835 [2024-11-25 10:31:03.884302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.884357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.884374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.835 [2024-11-25 10:31:03.884390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:56.835 [2024-11-25 10:31:03.884405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.884430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.884447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.835 [2024-11-25 10:31:03.884458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:56.835 [2024-11-25 10:31:03.884475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.884538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.835 [2024-11-25 10:31:03.884568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.884579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.835 [2024-11-25 10:31:03.884594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:56.835 [2024-11-25 10:31:03.884611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.921639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.921682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.835 [2024-11-25 10:31:03.921699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.059 ms 00:25:56.835 [2024-11-25 10:31:03.921710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.921823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.835 [2024-11-25 10:31:03.921837] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.835 [2024-11-25 10:31:03.921854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:56.835 [2024-11-25 10:31:03.921864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.835 [2024-11-25 10:31:03.922749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.835 [2024-11-25 10:31:03.927056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.851 ms, result 0 00:25:56.835 [2024-11-25 10:31:03.928462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.094 Some configs were skipped because the RPC state that can call them passed over. 00:25:57.094 10:31:03 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:57.094 [2024-11-25 10:31:04.164470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.094 [2024-11-25 10:31:04.164539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:57.094 [2024-11-25 10:31:04.164556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:25:57.094 [2024-11-25 10:31:04.164570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.094 [2024-11-25 10:31:04.164611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.823 ms, result 0 00:25:57.094 true 00:25:57.094 10:31:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:57.394 [2024-11-25 10:31:04.347846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.394 [2024-11-25 10:31:04.348053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:57.394 [2024-11-25 10:31:04.348087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:25:57.394 [2024-11-25 10:31:04.348099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.394 [2024-11-25 10:31:04.348156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.493 ms, result 0 00:25:57.394 true 00:25:57.394 10:31:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78675 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78675 ']' 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78675 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78675 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78675' 00:25:57.394 killing process with pid 78675 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78675 00:25:57.394 10:31:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78675 00:25:58.784 [2024-11-25 10:31:05.553996] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.554291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:58.784 [2024-11-25 10:31:05.554460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:58.784 [2024-11-25 10:31:05.554489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.554538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:58.784 [2024-11-25 10:31:05.558890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.558926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:58.784 [2024-11-25 10:31:05.558945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.335 ms 00:25:58.784 [2024-11-25 10:31:05.558955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.559217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.559231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:58.784 [2024-11-25 10:31:05.559244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:58.784 [2024-11-25 10:31:05.559254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.562648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.562688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:58.784 [2024-11-25 10:31:05.562703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.376 ms 00:25:58.784 [2024-11-25 10:31:05.562714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.568349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.568389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:58.784 [2024-11-25 10:31:05.568404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.604 ms 00:25:58.784 [2024-11-25 10:31:05.568415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.582910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.582958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:58.784 [2024-11-25 10:31:05.582977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.454 ms 00:25:58.784 [2024-11-25 10:31:05.582987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.593057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.593097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:58.784 [2024-11-25 10:31:05.593114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.015 ms 00:25:58.784 [2024-11-25 10:31:05.593125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.593267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.593282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:58.784 [2024-11-25 10:31:05.593303] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:58.784 [2024-11-25 10:31:05.593313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.608992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.609029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:58.784 [2024-11-25 10:31:05.609044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.680 ms 00:25:58.784 [2024-11-25 10:31:05.609054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.623815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.623978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:58.784 [2024-11-25 10:31:05.624016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.729 ms 00:25:58.784 [2024-11-25 10:31:05.624027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.638601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.638759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:58.784 [2024-11-25 10:31:05.638789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.503 ms 00:25:58.784 [2024-11-25 10:31:05.638799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.653410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.784 [2024-11-25 10:31:05.653447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:58.784 [2024-11-25 10:31:05.653470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.550 ms 00:25:58.784 [2024-11-25 10:31:05.653480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.784 [2024-11-25 10:31:05.653550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:58.784 [2024-11-25 10:31:05.653568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 
10:31:05.653725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:58.784 [2024-11-25 10:31:05.653829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.653996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:58.785 [2024-11-25 10:31:05.654102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.654988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:58.785 [2024-11-25 10:31:05.655022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:58.785 [2024-11-25 10:31:05.655051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460 00:25:58.785 [2024-11-25 10:31:05.655068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:58.785 [2024-11-25 10:31:05.655081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:58.786 [2024-11-25 10:31:05.655091] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:58.786 [2024-11-25 10:31:05.655104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:58.786 [2024-11-25 10:31:05.655114] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:58.786 [2024-11-25 10:31:05.655127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:58.786 [2024-11-25 10:31:05.655137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:58.786 [2024-11-25 10:31:05.655149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:58.786 [2024-11-25 10:31:05.655158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:58.786 [2024-11-25 10:31:05.655171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
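[editor's note] The stats dump above ends with "WAF: inf": write amplification factor is the ratio of total media writes to user writes, and this run performed 960 media writes against 0 user writes, so the ratio is undefined and the log renders it as infinity. A minimal sketch of that arithmetic (plain Python for illustration, not an SPDK API; the function name is hypothetical):

```python
def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification factor = total media writes / user writes.

    Mirrors the figures in the dump above (total writes: 960,
    user writes: 0): with no user writes the ratio is undefined,
    which the log prints as "WAF: inf".
    """
    if user_writes == 0:
        return float("inf")
    return total_writes / user_writes

assert waf(960, 0) == float("inf")   # the case logged above
assert waf(960, 480) == 2.0          # hypothetical run with 2x amplification
```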
00:25:58.786 [2024-11-25 10:31:05.655181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:58.786 [2024-11-25 10:31:05.655195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:25:58.786 [2024-11-25 10:31:05.655207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.674946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.786 [2024-11-25 10:31:05.674983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:58.786 [2024-11-25 10:31:05.675001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.741 ms 00:25:58.786 [2024-11-25 10:31:05.675012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.675608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:58.786 [2024-11-25 10:31:05.675635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:58.786 [2024-11-25 10:31:05.675652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:25:58.786 [2024-11-25 10:31:05.675662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.744138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.786 [2024-11-25 10:31:05.744176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:58.786 [2024-11-25 10:31:05.744192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.786 [2024-11-25 10:31:05.744203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.744289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.786 [2024-11-25 10:31:05.744304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:58.786 [2024-11-25 10:31:05.744317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.786 [2024-11-25 10:31:05.744328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.744379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.786 [2024-11-25 10:31:05.744391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:58.786 [2024-11-25 10:31:05.744407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.786 [2024-11-25 10:31:05.744417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.744438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.786 [2024-11-25 10:31:05.744449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:58.786 [2024-11-25 10:31:05.744462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.786 [2024-11-25 10:31:05.744474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.786 [2024-11-25 10:31:05.869736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.786 [2024-11-25 10:31:05.869991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.786 [2024-11-25 10:31:05.870024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.786 [2024-11-25 10:31:05.870035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 
10:31:05.970622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.970671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.045 [2024-11-25 10:31:05.970697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.970708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.970828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.970843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.045 [2024-11-25 10:31:05.970862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.970873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.970908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.970919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.045 [2024-11-25 10:31:05.970935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.970945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.971075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.971088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.045 [2024-11-25 10:31:05.971103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.971114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.971157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.971170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:59.045 [2024-11-25 10:31:05.971185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.971205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.971256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.971267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.045 [2024-11-25 10:31:05.971288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.971299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.971346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.045 [2024-11-25 10:31:05.971359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.045 [2024-11-25 10:31:05.971375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.045 [2024-11-25 10:31:05.971385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.045 [2024-11-25 10:31:05.971557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 418.183 ms, result 0 00:25:59.984 10:31:06 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:59.984 [2024-11-25 10:31:07.053594] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:25:59.984 [2024-11-25 10:31:07.053721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78739 ] 00:26:00.244 [2024-11-25 10:31:07.235266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.244 [2024-11-25 10:31:07.352019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.812 [2024-11-25 10:31:07.708618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.812 [2024-11-25 10:31:07.708690] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.812 [2024-11-25 10:31:07.870389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.870458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.812 [2024-11-25 10:31:07.870474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:00.812 [2024-11-25 10:31:07.870486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.873646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.873690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.812 [2024-11-25 10:31:07.873703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:26:00.812 [2024-11-25 10:31:07.873713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.873853] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.812 [2024-11-25 10:31:07.874880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.812 [2024-11-25 10:31:07.874919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.874931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.812 [2024-11-25 10:31:07.874943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:26:00.812 [2024-11-25 10:31:07.874953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.876545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:00.812 [2024-11-25 10:31:07.896015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.896056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:00.812 [2024-11-25 10:31:07.896071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.503 ms 00:26:00.812 [2024-11-25 10:31:07.896082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.896185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.896200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:00.812 [2024-11-25 10:31:07.896211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:00.812 [2024-11-25 
10:31:07.896222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.902905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.903075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.812 [2024-11-25 10:31:07.903096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.650 ms 00:26:00.812 [2024-11-25 10:31:07.903108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.903218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.903233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.812 [2024-11-25 10:31:07.903244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:00.812 [2024-11-25 10:31:07.903254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.903287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.903299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.812 [2024-11-25 10:31:07.903309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:00.812 [2024-11-25 10:31:07.903320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.903343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:00.812 [2024-11-25 10:31:07.908133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.908168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.812 [2024-11-25 10:31:07.908180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.804 ms 00:26:00.812 [2024-11-25 10:31:07.908190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.908260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.908273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.812 [2024-11-25 10:31:07.908284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:00.812 [2024-11-25 10:31:07.908294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.908321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:00.812 [2024-11-25 10:31:07.908342] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:00.812 [2024-11-25 10:31:07.908377] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:00.812 [2024-11-25 10:31:07.908395] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:00.812 [2024-11-25 10:31:07.908482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.812 [2024-11-25 10:31:07.908514] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.812 [2024-11-25 10:31:07.908527] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
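[editor's note] The layout records that follow report each region twice: the dump_region lines give offsets and sizes in MiB, while the SB metadata layout lines give the same regions as hex block offsets (blk_offs) and sizes (blk_sz). The two agree under a 4 KiB FTL block size, which is an assumption inferred from this log's own numbers, e.g. the l2p region's blk_sz:0x5a00 is 23040 blocks × 4 KiB = 90.00 MiB, matching "Region l2p ... blocks: 90.00 MiB" below. A minimal conversion sketch under that assumption (not an SPDK API):

```python
# Convert blk_offs/blk_sz hex block counts from the SB metadata layout
# records into MiB. The 4 KiB block size is an assumption inferred from
# this log (0x5a00 blocks <-> 90.00 MiB for the l2p region).
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed)

def blocks_to_mib(blocks_hex: str) -> float:
    return int(blocks_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

assert blocks_to_mib("0x5a00") == 90.0         # l2p region: "blocks: 90.00 MiB"
assert blocks_to_mib("0x20") == 0.125          # sb region: "blocks: 0.12 MiB"
assert blocks_to_mib("0x1900000") == 102400.0  # base-dev data region: 102400.00 MiB
```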
00:26:00.812 [2024-11-25 10:31:07.908545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.812 [2024-11-25 10:31:07.908557] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.812 [2024-11-25 10:31:07.908568] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:00.812 [2024-11-25 10:31:07.908578] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.812 [2024-11-25 10:31:07.908588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.812 [2024-11-25 10:31:07.908597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.812 [2024-11-25 10:31:07.908608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.908618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.812 [2024-11-25 10:31:07.908629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:26:00.812 [2024-11-25 10:31:07.908639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.908715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.812 [2024-11-25 10:31:07.908729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.812 [2024-11-25 10:31:07.908740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:00.812 [2024-11-25 10:31:07.908749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.812 [2024-11-25 10:31:07.908839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.812 [2024-11-25 10:31:07.908852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.812 [2024-11-25 10:31:07.908863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.812 [2024-11-25 10:31:07.908873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.812 [2024-11-25 10:31:07.908884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.812 [2024-11-25 10:31:07.908893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.812 [2024-11-25 10:31:07.908903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:00.812 [2024-11-25 10:31:07.908913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.812 [2024-11-25 10:31:07.908922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:00.812 [2024-11-25 10:31:07.908931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.812 [2024-11-25 10:31:07.908941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.812 [2024-11-25 10:31:07.908962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:00.812 [2024-11-25 10:31:07.908971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.812 [2024-11-25 10:31:07.908981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.812 [2024-11-25 10:31:07.908991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:00.812 [2024-11-25 10:31:07.909000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.812 [2024-11-25 10:31:07.909009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:00.812 [2024-11-25 10:31:07.909018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:00.812 [2024-11-25 10:31:07.909027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.812 [2024-11-25 10:31:07.909037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.813 [2024-11-25 10:31:07.909046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.813 [2024-11-25 10:31:07.909073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.813 [2024-11-25 10:31:07.909101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.813 [2024-11-25 10:31:07.909127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.813 [2024-11-25 10:31:07.909153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.813 [2024-11-25 10:31:07.909171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.813 [2024-11-25 10:31:07.909180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:00.813 [2024-11-25 10:31:07.909188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.813 [2024-11-25 10:31:07.909197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.813 [2024-11-25 10:31:07.909206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:00.813 [2024-11-25 10:31:07.909215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.813 [2024-11-25 10:31:07.909233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:00.813 [2024-11-25 10:31:07.909244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909254] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.813 [2024-11-25 10:31:07.909263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.813 [2024-11-25 10:31:07.909276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.813 [2024-11-25 10:31:07.909306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:00.813 [2024-11-25 10:31:07.909315] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.813 [2024-11-25 10:31:07.909324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.813 [2024-11-25 10:31:07.909333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.813 [2024-11-25 10:31:07.909343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.813 [2024-11-25 10:31:07.909352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.813 [2024-11-25 10:31:07.909363] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.813 [2024-11-25 10:31:07.909374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:00.813 [2024-11-25 10:31:07.909396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:00.813 [2024-11-25 10:31:07.909407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:00.813 [2024-11-25 10:31:07.909417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:00.813 [2024-11-25 10:31:07.909427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:00.813 [2024-11-25 10:31:07.909437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:00.813 [2024-11-25 10:31:07.909447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:00.813 [2024-11-25 10:31:07.909457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:00.813 [2024-11-25 10:31:07.909468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:00.813 [2024-11-25 10:31:07.909478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:00.813 [2024-11-25 10:31:07.909540] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.813 [2024-11-25 10:31:07.909551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.813 [2024-11-25 10:31:07.909574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.813 [2024-11-25 10:31:07.909584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.813 [2024-11-25 10:31:07.909595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.813 [2024-11-25 10:31:07.909606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.813 [2024-11-25 10:31:07.909621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.813 [2024-11-25 10:31:07.909631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:26:00.813 [2024-11-25 10:31:07.909641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:07.949177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:07.949362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:01.072 [2024-11-25 10:31:07.949518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.542 ms 00:26:01.072 [2024-11-25 10:31:07.949565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:07.949717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:07.949866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:01.072 [2024-11-25 10:31:07.949909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:01.072 [2024-11-25 10:31:07.949939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:08.012731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:08.012902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:01.072 [2024-11-25 10:31:08.013063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.842 ms 00:26:01.072 [2024-11-25 10:31:08.013106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:08.013222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:08.013434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:01.072 [2024-11-25 10:31:08.013480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:01.072 [2024-11-25 10:31:08.013533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:08.014021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:08.014137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:01.072 [2024-11-25 10:31:08.014218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:26:01.072 [2024-11-25 10:31:08.014268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.072 [2024-11-25 10:31:08.014476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:01.072 [2024-11-25 10:31:08.014600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:01.072 [2024-11-25 10:31:08.014689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:01.072 [2024-11-25 10:31:08.014728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.036009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.036159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:01.073 [2024-11-25 10:31:08.036293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.263 ms 00:26:01.073 [2024-11-25 10:31:08.036336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.055411] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:01.073 [2024-11-25 10:31:08.055604] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:01.073 [2024-11-25 10:31:08.055724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.055760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:01.073 [2024-11-25 10:31:08.055791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.290 ms 00:26:01.073 [2024-11-25 10:31:08.055874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.085459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.085631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:01.073 [2024-11-25 10:31:08.085724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.514 ms 00:26:01.073 [2024-11-25 10:31:08.085743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.104047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.104086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:01.073 [2024-11-25 10:31:08.104100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.250 ms 00:26:01.073 [2024-11-25 10:31:08.104109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.121996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.122035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:01.073 [2024-11-25 10:31:08.122048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.838 ms 00:26:01.073 [2024-11-25 10:31:08.122058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.073 [2024-11-25 10:31:08.122797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.073 [2024-11-25 10:31:08.122823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:01.073 [2024-11-25 10:31:08.122835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:26:01.073 [2024-11-25 10:31:08.122845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.207944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 
10:31:08.208004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:01.333 [2024-11-25 10:31:08.208020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.209 ms 00:26:01.333 [2024-11-25 10:31:08.208031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.218729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:01.333 [2024-11-25 10:31:08.235132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.235179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:01.333 [2024-11-25 10:31:08.235195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.035 ms 00:26:01.333 [2024-11-25 10:31:08.235211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.235342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.235374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:01.333 [2024-11-25 10:31:08.235387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:01.333 [2024-11-25 10:31:08.235396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.235454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.235465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:01.333 [2024-11-25 10:31:08.235476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:01.333 [2024-11-25 10:31:08.235510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.235541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.235552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:01.333 [2024-11-25 10:31:08.235562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:01.333 [2024-11-25 10:31:08.235573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.235620] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:01.333 [2024-11-25 10:31:08.235639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.235656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:01.333 [2024-11-25 10:31:08.235668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:01.333 [2024-11-25 10:31:08.235678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.271735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.271780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:01.333 [2024-11-25 10:31:08.271795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.089 ms 00:26:01.333 [2024-11-25 10:31:08.271806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.271923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.333 [2024-11-25 10:31:08.271937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:01.333 [2024-11-25 
10:31:08.271949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:01.333 [2024-11-25 10:31:08.271959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.333 [2024-11-25 10:31:08.272822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:01.333 [2024-11-25 10:31:08.277034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.791 ms, result 0 00:26:01.333 [2024-11-25 10:31:08.277763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:01.333 [2024-11-25 10:31:08.296072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:02.269  [2024-11-25T10:31:10.761Z] Copying: 30/256 [MB] (30 MBps) [2024-11-25T10:31:11.699Z] Copying: 57/256 [MB] (26 MBps) [2024-11-25T10:31:12.637Z] Copying: 83/256 [MB] (25 MBps) [2024-11-25T10:31:13.576Z] Copying: 110/256 [MB] (26 MBps) [2024-11-25T10:31:14.513Z] Copying: 137/256 [MB] (26 MBps) [2024-11-25T10:31:15.452Z] Copying: 163/256 [MB] (26 MBps) [2024-11-25T10:31:16.388Z] Copying: 188/256 [MB] (25 MBps) [2024-11-25T10:31:17.762Z] Copying: 213/256 [MB] (24 MBps) [2024-11-25T10:31:18.021Z] Copying: 238/256 [MB] (25 MBps) [2024-11-25T10:31:18.589Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-25 10:31:18.308964] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:11.477 [2024-11-25 10:31:18.329171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.329250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:11.477 [2024-11-25 10:31:18.329266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:11.477 [2024-11-25 10:31:18.329287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 10:31:18.329328] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:11.477 [2024-11-25 10:31:18.333378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.333414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:11.477 [2024-11-25 10:31:18.333428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:26:11.477 [2024-11-25 10:31:18.333438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 10:31:18.333721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.333740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:11.477 [2024-11-25 10:31:18.333752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:26:11.477 [2024-11-25 10:31:18.333762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 10:31:18.336935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.337109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:11.477 [2024-11-25 10:31:18.337134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.160 ms 00:26:11.477 [2024-11-25 10:31:18.337144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 
10:31:18.342842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.342881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:11.477 [2024-11-25 10:31:18.342894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.671 ms 00:26:11.477 [2024-11-25 10:31:18.342905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 10:31:18.380812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.380878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:11.477 [2024-11-25 10:31:18.380895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.875 ms 00:26:11.477 [2024-11-25 10:31:18.380906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.477 [2024-11-25 10:31:18.403116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.477 [2024-11-25 10:31:18.403182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:11.478 [2024-11-25 10:31:18.403208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.148 ms 00:26:11.478 [2024-11-25 10:31:18.403218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.403389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.478 [2024-11-25 10:31:18.403404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:11.478 [2024-11-25 10:31:18.403428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:11.478 [2024-11-25 10:31:18.403438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.442351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.478 [2024-11-25 10:31:18.442418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:11.478 [2024-11-25 10:31:18.442434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.954 ms 00:26:11.478 [2024-11-25 10:31:18.442444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.479458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.478 [2024-11-25 10:31:18.479533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:11.478 [2024-11-25 10:31:18.479549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.961 ms 00:26:11.478 [2024-11-25 10:31:18.479559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.516660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.478 [2024-11-25 10:31:18.516724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:11.478 [2024-11-25 10:31:18.516740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.070 ms 00:26:11.478 [2024-11-25 10:31:18.516750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.553794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.478 [2024-11-25 10:31:18.554038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:11.478 [2024-11-25 10:31:18.554066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.958 ms 00:26:11.478 [2024-11-25 10:31:18.554078] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.478 [2024-11-25 10:31:18.554163] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:11.478 [2024-11-25 10:31:18.554181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:11.478 [2024-11-25 10:31:18.554789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.554999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555020] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 
10:31:18.555308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:11.479 [2024-11-25 10:31:18.555338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:11.479 [2024-11-25 10:31:18.555349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7dfaaa21-aae5-4940-8ecb-2cbd33e49460 00:26:11.479 [2024-11-25 10:31:18.555360] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:11.479 [2024-11-25 10:31:18.555370] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:11.479 [2024-11-25 10:31:18.555380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:11.479 [2024-11-25 10:31:18.555390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:11.479 [2024-11-25 10:31:18.555400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:11.479 [2024-11-25 10:31:18.555410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:11.479 [2024-11-25 10:31:18.555420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:11.479 [2024-11-25 10:31:18.555429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:11.479 [2024-11-25 10:31:18.555437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:11.479 [2024-11-25 10:31:18.555448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.479 [2024-11-25 10:31:18.555463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:11.479 [2024-11-25 10:31:18.555480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:26:11.479 [2024-11-25 10:31:18.555510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.479 [2024-11-25 10:31:18.576382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.479 [2024-11-25 10:31:18.576433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:11.479 [2024-11-25 10:31:18.576449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.865 ms 00:26:11.479 [2024-11-25 10:31:18.576460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.479 [2024-11-25 10:31:18.577086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.479 [2024-11-25 10:31:18.577246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:11.479 [2024-11-25 10:31:18.577270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:26:11.479 [2024-11-25 10:31:18.577281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.738 [2024-11-25 10:31:18.632939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.738 [2024-11-25 10:31:18.633005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:11.738 [2024-11-25 10:31:18.633021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.738 [2024-11-25 10:31:18.633031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.738 [2024-11-25 10:31:18.633180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.738 [2024-11-25 10:31:18.633193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:26:11.738 [2024-11-25 10:31:18.633204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.738 [2024-11-25 10:31:18.633214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.739 [2024-11-25 10:31:18.633275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.739 [2024-11-25 10:31:18.633288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:11.739 [2024-11-25 10:31:18.633309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.739 [2024-11-25 10:31:18.633319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.739 [2024-11-25 10:31:18.633342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.739 [2024-11-25 10:31:18.633353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:11.739 [2024-11-25 10:31:18.633364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.739 [2024-11-25 10:31:18.633374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.739 [2024-11-25 10:31:18.757260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.739 [2024-11-25 10:31:18.757332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:11.739 [2024-11-25 10:31:18.757348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.739 [2024-11-25 10:31:18.757358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:11.998 [2024-11-25 10:31:18.859439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:11.998 [2024-11-25 10:31:18.859588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:11.998 [2024-11-25 10:31:18.859666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:11.998 [2024-11-25 10:31:18.859818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859876] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:11.998 [2024-11-25 10:31:18.859891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.859939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.859950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:11.998 [2024-11-25 10:31:18.859960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.859977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.860036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:11.998 [2024-11-25 10:31:18.860052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:11.998 [2024-11-25 10:31:18.860066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:11.998 [2024-11-25 10:31:18.860076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.998 [2024-11-25 10:31:18.860222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.912 ms, result 0 00:26:12.932 00:26:12.932 00:26:12.932 10:31:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:13.498 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:13.498 10:31:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78675 00:26:13.498 Process with pid 78675 is not found 00:26:13.498 10:31:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78675 ']' 00:26:13.498 10:31:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78675 00:26:13.498 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78675) - No such process 00:26:13.498 10:31:20 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78675 is not found' 00:26:13.498 00:26:13.498 real 1m11.191s 00:26:13.498 user 1m43.310s 00:26:13.498 sys 0m6.685s 00:26:13.498 10:31:20 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.498 10:31:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:13.498 ************************************ 00:26:13.498 END TEST ftl_trim 00:26:13.498 ************************************ 00:26:13.498 10:31:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:13.498 10:31:20 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:13.498 10:31:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.498 10:31:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:13.498 ************************************ 
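The trim test above closes with an integrity check: the data file written through the FTL bdev is compared against a previously recorded md5 before the scratch files are removed and the target process is killed. A minimal sketch of that verify-and-cleanup flow, reusing the exact paths from the trace (the fio workload itself is elided, and the assumption that the checksum is recorded before the workload runs is mine, not shown in this excerpt):

testdir=/home/vagrant/spdk_repo/spdk/test/ftl

md5sum "$testdir/data" > "$testdir/testfile.md5"   # assumed: checksum taken when the pattern is written
# ... fio workload and FTL shutdown happen here ...
md5sum -c "$testdir/testfile.md5"                  # prints "<path>: OK" on a match, as seen above

# Cleanup mirrors trim.sh lines 15-18 in the trace:
rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json" \
      "$testdir/random_pattern" "$testdir/data"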
00:26:13.498 START TEST ftl_restore 00:26:13.498 ************************************ 00:26:13.498 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:13.799 * Looking for test storage... 00:26:13.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.799 10:31:20 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.799 --rc genhtml_branch_coverage=1 00:26:13.799 --rc genhtml_function_coverage=1 00:26:13.799 --rc genhtml_legend=1 00:26:13.799 --rc geninfo_all_blocks=1 00:26:13.799 --rc geninfo_unexecuted_blocks=1 00:26:13.799 00:26:13.799 ' 00:26:13.799 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:13.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.799 --rc genhtml_branch_coverage=1 00:26:13.799 --rc genhtml_function_coverage=1 00:26:13.799 --rc genhtml_legend=1 00:26:13.799 --rc geninfo_all_blocks=1 00:26:13.800 --rc geninfo_unexecuted_blocks=1 00:26:13.800 00:26:13.800 ' 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.800 --rc genhtml_branch_coverage=1 00:26:13.800 --rc genhtml_function_coverage=1 00:26:13.800 --rc genhtml_legend=1 00:26:13.800 --rc geninfo_all_blocks=1 00:26:13.800 --rc geninfo_unexecuted_blocks=1 00:26:13.800 00:26:13.800 ' 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:13.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.800 --rc genhtml_branch_coverage=1 00:26:13.800 --rc genhtml_function_coverage=1 00:26:13.800 --rc genhtml_legend=1 00:26:13.800 --rc geninfo_all_blocks=1 00:26:13.800 --rc geninfo_unexecuted_blocks=1 00:26:13.800 00:26:13.800 ' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
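The cmp_versions trace above (lt 1.15 2) decides whether the installed lcov predates version 2 before the coverage flags are exported. Condensed into one standalone function, the field-by-field compare it performs looks roughly like this; the IFS split and the loop bound are taken straight from the trace, while the zero-padding of missing fields is an assumption about branches not exercised here:

# Sketch: split on '.', '-' or ':' and compare the fields numerically.
lt() {                                   # lt 1.15 2  ->  returns 0 (true), as traced
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first higher field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                             # all fields equal: not strictly less
}

lt 1.15 2 && echo "lcov older than 2"    # field 0: 1 < 2, so this prints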
00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.6OA3hbCY7S 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:13.800 
10:31:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78949 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78949 00:26:13.800 10:31:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78949 ']' 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.800 10:31:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:14.100 [2024-11-25 10:31:20.928567] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:26:14.100 [2024-11-25 10:31:20.928693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78949 ] 00:26:14.100 [2024-11-25 10:31:21.111439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.356 [2024-11-25 10:31:21.219883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.290 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.290 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:26:15.290 10:31:22 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:15.290 10:31:22 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:15.290 10:31:22 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:15.290 10:31:22 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:15.291 10:31:22 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:15.291 10:31:22 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:15.549 10:31:22 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:15.549 10:31:22 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:15.549 10:31:22 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:15.549 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:15.549 { 00:26:15.549 "name": "nvme0n1", 00:26:15.549 "aliases": [ 00:26:15.549 "20515b80-5c21-4494-93f8-b728099c62df" 00:26:15.549 ], 00:26:15.549 "product_name": "NVMe disk", 00:26:15.549 "block_size": 4096, 00:26:15.549 "num_blocks": 1310720, 00:26:15.549 "uuid": 
"20515b80-5c21-4494-93f8-b728099c62df", 00:26:15.549 "numa_id": -1, 00:26:15.549 "assigned_rate_limits": { 00:26:15.549 "rw_ios_per_sec": 0, 00:26:15.549 "rw_mbytes_per_sec": 0, 00:26:15.549 "r_mbytes_per_sec": 0, 00:26:15.549 "w_mbytes_per_sec": 0 00:26:15.549 }, 00:26:15.549 "claimed": true, 00:26:15.549 "claim_type": "read_many_write_one", 00:26:15.549 "zoned": false, 00:26:15.549 "supported_io_types": { 00:26:15.549 "read": true, 00:26:15.549 "write": true, 00:26:15.549 "unmap": true, 00:26:15.549 "flush": true, 00:26:15.549 "reset": true, 00:26:15.549 "nvme_admin": true, 00:26:15.549 "nvme_io": true, 00:26:15.549 "nvme_io_md": false, 00:26:15.549 "write_zeroes": true, 00:26:15.549 "zcopy": false, 00:26:15.549 "get_zone_info": false, 00:26:15.549 "zone_management": false, 00:26:15.549 "zone_append": false, 00:26:15.549 "compare": true, 00:26:15.549 "compare_and_write": false, 00:26:15.549 "abort": true, 00:26:15.549 "seek_hole": false, 00:26:15.549 "seek_data": false, 00:26:15.549 "copy": true, 00:26:15.549 "nvme_iov_md": false 00:26:15.549 }, 00:26:15.549 "driver_specific": { 00:26:15.549 "nvme": [ 00:26:15.549 { 00:26:15.549 "pci_address": "0000:00:11.0", 00:26:15.549 "trid": { 00:26:15.549 "trtype": "PCIe", 00:26:15.549 "traddr": "0000:00:11.0" 00:26:15.549 }, 00:26:15.549 "ctrlr_data": { 00:26:15.549 "cntlid": 0, 00:26:15.549 "vendor_id": "0x1b36", 00:26:15.549 "model_number": "QEMU NVMe Ctrl", 00:26:15.549 "serial_number": "12341", 00:26:15.549 "firmware_revision": "8.0.0", 00:26:15.549 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:15.549 "oacs": { 00:26:15.549 "security": 0, 00:26:15.549 "format": 1, 00:26:15.549 "firmware": 0, 00:26:15.549 "ns_manage": 1 00:26:15.549 }, 00:26:15.549 "multi_ctrlr": false, 00:26:15.549 "ana_reporting": false 00:26:15.549 }, 00:26:15.549 "vs": { 00:26:15.549 "nvme_version": "1.4" 00:26:15.550 }, 00:26:15.550 "ns_data": { 00:26:15.550 "id": 1, 00:26:15.550 "can_share": false 00:26:15.550 } 00:26:15.550 } 00:26:15.550 ], 00:26:15.550 "mp_policy": "active_passive" 00:26:15.550 } 00:26:15.550 } 00:26:15.550 ]' 00:26:15.550 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:15.550 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:15.550 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:15.808 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:15.808 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:15.808 10:31:22 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c940c363-2bb1-4496-ae60-53fd257d88bf 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:15.808 10:31:22 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c940c363-2bb1-4496-ae60-53fd257d88bf 00:26:16.067 10:31:23 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:26:16.326 10:31:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f3964d6d-da85-436a-92b9-43e8bf7b701c 00:26:16.326 10:31:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3964d6d-da85-436a-92b9-43e8bf7b701c 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:16.587 10:31:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.587 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.587 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:16.587 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:16.587 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:16.587 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:16.846 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:16.846 { 00:26:16.846 "name": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:16.846 "aliases": [ 00:26:16.846 "lvs/nvme0n1p0" 00:26:16.846 ], 00:26:16.846 "product_name": "Logical Volume", 00:26:16.846 "block_size": 4096, 00:26:16.846 "num_blocks": 26476544, 00:26:16.846 "uuid": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:16.846 "assigned_rate_limits": { 00:26:16.846 "rw_ios_per_sec": 0, 00:26:16.846 "rw_mbytes_per_sec": 0, 00:26:16.846 "r_mbytes_per_sec": 0, 00:26:16.846 "w_mbytes_per_sec": 0 00:26:16.846 }, 00:26:16.846 "claimed": false, 00:26:16.846 "zoned": false, 00:26:16.846 "supported_io_types": { 00:26:16.846 "read": true, 00:26:16.846 "write": true, 00:26:16.846 "unmap": true, 00:26:16.846 "flush": false, 00:26:16.846 "reset": true, 00:26:16.846 "nvme_admin": false, 00:26:16.846 "nvme_io": false, 00:26:16.846 "nvme_io_md": false, 00:26:16.846 "write_zeroes": true, 00:26:16.846 "zcopy": false, 00:26:16.846 "get_zone_info": false, 00:26:16.846 "zone_management": false, 00:26:16.846 "zone_append": false, 00:26:16.846 "compare": false, 00:26:16.846 "compare_and_write": false, 00:26:16.846 "abort": false, 00:26:16.846 "seek_hole": true, 00:26:16.846 "seek_data": true, 00:26:16.846 "copy": false, 00:26:16.846 "nvme_iov_md": false 00:26:16.846 }, 00:26:16.846 "driver_specific": { 00:26:16.846 "lvol": { 00:26:16.847 "lvol_store_uuid": "f3964d6d-da85-436a-92b9-43e8bf7b701c", 00:26:16.847 "base_bdev": "nvme0n1", 00:26:16.847 "thin_provision": true, 00:26:16.847 "num_allocated_clusters": 0, 00:26:16.847 "snapshot": false, 00:26:16.847 "clone": false, 00:26:16.847 "esnap_clone": false 00:26:16.847 } 00:26:16.847 } 00:26:16.847 } 00:26:16.847 ]' 00:26:16.847 10:31:23 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:16.847 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:16.847 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:16.847 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:16.847 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:16.847 10:31:23 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:16.847 10:31:23 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:16.847 10:31:23 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:16.847 10:31:23 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:17.106 10:31:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:17.106 10:31:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:17.106 10:31:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.106 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.106 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:17.106 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:17.106 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:17.106 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:17.366 { 00:26:17.366 "name": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:17.366 "aliases": [ 00:26:17.366 "lvs/nvme0n1p0" 00:26:17.366 ], 00:26:17.366 "product_name": "Logical Volume", 00:26:17.366 "block_size": 4096, 00:26:17.366 "num_blocks": 26476544, 00:26:17.366 "uuid": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:17.366 "assigned_rate_limits": { 00:26:17.366 "rw_ios_per_sec": 0, 00:26:17.366 "rw_mbytes_per_sec": 0, 00:26:17.366 "r_mbytes_per_sec": 0, 00:26:17.366 "w_mbytes_per_sec": 0 00:26:17.366 }, 00:26:17.366 "claimed": false, 00:26:17.366 "zoned": false, 00:26:17.366 "supported_io_types": { 00:26:17.366 "read": true, 00:26:17.366 "write": true, 00:26:17.366 "unmap": true, 00:26:17.366 "flush": false, 00:26:17.366 "reset": true, 00:26:17.366 "nvme_admin": false, 00:26:17.366 "nvme_io": false, 00:26:17.366 "nvme_io_md": false, 00:26:17.366 "write_zeroes": true, 00:26:17.366 "zcopy": false, 00:26:17.366 "get_zone_info": false, 00:26:17.366 "zone_management": false, 00:26:17.366 "zone_append": false, 00:26:17.366 "compare": false, 00:26:17.366 "compare_and_write": false, 00:26:17.366 "abort": false, 00:26:17.366 "seek_hole": true, 00:26:17.366 "seek_data": true, 00:26:17.366 "copy": false, 00:26:17.366 "nvme_iov_md": false 00:26:17.366 }, 00:26:17.366 "driver_specific": { 00:26:17.366 "lvol": { 00:26:17.366 "lvol_store_uuid": "f3964d6d-da85-436a-92b9-43e8bf7b701c", 00:26:17.366 "base_bdev": "nvme0n1", 00:26:17.366 "thin_provision": true, 00:26:17.366 "num_allocated_clusters": 0, 00:26:17.366 "snapshot": false, 00:26:17.366 "clone": false, 00:26:17.366 "esnap_clone": false 00:26:17.366 } 00:26:17.366 } 00:26:17.366 } 00:26:17.366 ]' 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
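The bs=4096 / nb=26476544 pairs above feed a get_bdev_size helper that multiplies block size by block count and reports MiB: 4096 B x 26476544 blocks = 103424 MiB for the lvol (and 1310720 blocks = 5120 MiB for nvme0n1 earlier). A minimal sketch under the same assumptions, using only the rpc.py and jq calls visible in the trace:

# Sketch: size in MiB = block_size * num_blocks / 1024 / 1024.
get_bdev_size() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 above
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 above
    echo $(( bs * nb / 1024 / 1024 ))              # -> 103424
}

get_bdev_size d92c4d98-eeeb-4637-97b2-23c44242b469   # the lvol queried three times above, 103424 each time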
00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:17.366 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:17.366 10:31:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:17.366 10:31:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:17.625 10:31:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:17.625 10:31:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.625 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.625 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:17.625 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:17.625 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:17.625 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d92c4d98-eeeb-4637-97b2-23c44242b469 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:17.885 { 00:26:17.885 "name": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:17.885 "aliases": [ 00:26:17.885 "lvs/nvme0n1p0" 00:26:17.885 ], 00:26:17.885 "product_name": "Logical Volume", 00:26:17.885 "block_size": 4096, 00:26:17.885 "num_blocks": 26476544, 00:26:17.885 "uuid": "d92c4d98-eeeb-4637-97b2-23c44242b469", 00:26:17.885 "assigned_rate_limits": { 00:26:17.885 "rw_ios_per_sec": 0, 00:26:17.885 "rw_mbytes_per_sec": 0, 00:26:17.885 "r_mbytes_per_sec": 0, 00:26:17.885 "w_mbytes_per_sec": 0 00:26:17.885 }, 00:26:17.885 "claimed": false, 00:26:17.885 "zoned": false, 00:26:17.885 "supported_io_types": { 00:26:17.885 "read": true, 00:26:17.885 "write": true, 00:26:17.885 "unmap": true, 00:26:17.885 "flush": false, 00:26:17.885 "reset": true, 00:26:17.885 "nvme_admin": false, 00:26:17.885 "nvme_io": false, 00:26:17.885 "nvme_io_md": false, 00:26:17.885 "write_zeroes": true, 00:26:17.885 "zcopy": false, 00:26:17.885 "get_zone_info": false, 00:26:17.885 "zone_management": false, 00:26:17.885 "zone_append": false, 00:26:17.885 "compare": false, 00:26:17.885 "compare_and_write": false, 00:26:17.885 "abort": false, 00:26:17.885 "seek_hole": true, 00:26:17.885 "seek_data": true, 00:26:17.885 "copy": false, 00:26:17.885 "nvme_iov_md": false 00:26:17.885 }, 00:26:17.885 "driver_specific": { 00:26:17.885 "lvol": { 00:26:17.885 "lvol_store_uuid": "f3964d6d-da85-436a-92b9-43e8bf7b701c", 00:26:17.885 "base_bdev": "nvme0n1", 00:26:17.885 "thin_provision": true, 00:26:17.885 "num_allocated_clusters": 0, 00:26:17.885 "snapshot": false, 00:26:17.885 "clone": false, 00:26:17.885 "esnap_clone": false 00:26:17.885 } 00:26:17.885 } 00:26:17.885 } 00:26:17.885 ]' 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:17.885 10:31:24 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:17.885 10:31:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d92c4d98-eeeb-4637-97b2-23c44242b469 --l2p_dram_limit 10' 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:17.885 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:17.885 10:31:24 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d92c4d98-eeeb-4637-97b2-23c44242b469 --l2p_dram_limit 10 -c nvc0n1p0 00:26:18.145 [2024-11-25 10:31:25.143799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.145 [2024-11-25 10:31:25.143861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:18.145 [2024-11-25 10:31:25.143881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:18.145 [2024-11-25 10:31:25.143892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.143959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.143971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:18.146 [2024-11-25 10:31:25.143985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:18.146 [2024-11-25 10:31:25.143995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.144026] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:18.146 [2024-11-25 10:31:25.145076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:18.146 [2024-11-25 10:31:25.145118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.145130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:18.146 [2024-11-25 10:31:25.145144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:26:18.146 [2024-11-25 10:31:25.145155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.145243] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:26:18.146 [2024-11-25 10:31:25.146705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.146744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:18.146 [2024-11-25 10:31:25.146756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:18.146 [2024-11-25 10:31:25.146772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.154181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 
10:31:25.154222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:18.146 [2024-11-25 10:31:25.154235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.369 ms 00:26:18.146 [2024-11-25 10:31:25.154248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.154347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.154364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:18.146 [2024-11-25 10:31:25.154375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:18.146 [2024-11-25 10:31:25.154392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.154448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.154463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:18.146 [2024-11-25 10:31:25.154477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:18.146 [2024-11-25 10:31:25.154502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.154527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:18.146 [2024-11-25 10:31:25.159798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.159834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:18.146 [2024-11-25 10:31:25.159851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms 00:26:18.146 [2024-11-25 10:31:25.159861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.159899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.159911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:18.146 [2024-11-25 10:31:25.159923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:18.146 [2024-11-25 10:31:25.159933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.159979] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:18.146 [2024-11-25 10:31:25.160107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:18.146 [2024-11-25 10:31:25.160127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:18.146 [2024-11-25 10:31:25.160140] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:18.146 [2024-11-25 10:31:25.160160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160177] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160198] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:18.146 [2024-11-25 10:31:25.160209] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:18.146 [2024-11-25 10:31:25.160224] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:18.146 [2024-11-25 10:31:25.160234] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:18.146 [2024-11-25 10:31:25.160248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.160269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:18.146 [2024-11-25 10:31:25.160283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:18.146 [2024-11-25 10:31:25.160294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.160371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.146 [2024-11-25 10:31:25.160385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:18.146 [2024-11-25 10:31:25.160405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:18.146 [2024-11-25 10:31:25.160418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.146 [2024-11-25 10:31:25.160528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:18.146 [2024-11-25 10:31:25.160555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:18.146 [2024-11-25 10:31:25.160569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:18.146 [2024-11-25 10:31:25.160602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:18.146 [2024-11-25 10:31:25.160635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.146 [2024-11-25 10:31:25.160661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:18.146 [2024-11-25 10:31:25.160677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:18.146 [2024-11-25 10:31:25.160692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.146 [2024-11-25 10:31:25.160702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:18.146 [2024-11-25 10:31:25.160714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:18.146 [2024-11-25 10:31:25.160723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:18.146 [2024-11-25 10:31:25.160755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:18.146 [2024-11-25 10:31:25.160806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:18.146 
[2024-11-25 10:31:25.160836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:18.146 [2024-11-25 10:31:25.160870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:18.146 [2024-11-25 10:31:25.160899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.146 [2024-11-25 10:31:25.160920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:18.146 [2024-11-25 10:31:25.160934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:18.146 [2024-11-25 10:31:25.160943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.146 [2024-11-25 10:31:25.160954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:18.146 [2024-11-25 10:31:25.160963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:18.146 [2024-11-25 10:31:25.160981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.146 [2024-11-25 10:31:25.160997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:18.146 [2024-11-25 10:31:25.161017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:18.146 [2024-11-25 10:31:25.161027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.161039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:18.146 [2024-11-25 10:31:25.161048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:18.146 [2024-11-25 10:31:25.161060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.161069] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:18.146 [2024-11-25 10:31:25.161082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:18.146 [2024-11-25 10:31:25.161092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.146 [2024-11-25 10:31:25.161105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.146 [2024-11-25 10:31:25.161116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:18.146 [2024-11-25 10:31:25.161130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:18.147 [2024-11-25 10:31:25.161139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:18.147 [2024-11-25 10:31:25.161151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:18.147 [2024-11-25 10:31:25.161160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:18.147 [2024-11-25 10:31:25.161172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:18.147 [2024-11-25 10:31:25.161186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:18.147 [2024-11-25 
10:31:25.161205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:18.147 [2024-11-25 10:31:25.161242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:18.147 [2024-11-25 10:31:25.161261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:18.147 [2024-11-25 10:31:25.161279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:18.147 [2024-11-25 10:31:25.161290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:18.147 [2024-11-25 10:31:25.161313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:18.147 [2024-11-25 10:31:25.161324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:18.147 [2024-11-25 10:31:25.161337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:18.147 [2024-11-25 10:31:25.161347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:18.147 [2024-11-25 10:31:25.161362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:18.147 [2024-11-25 10:31:25.161437] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:18.147 [2024-11-25 10:31:25.161451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:18.147 [2024-11-25 10:31:25.161475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:18.147 [2024-11-25 10:31:25.161486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:18.147 [2024-11-25 10:31:25.161511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:18.147 [2024-11-25 10:31:25.161523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.147 [2024-11-25 10:31:25.161543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:18.147 [2024-11-25 10:31:25.161558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:26:18.147 [2024-11-25 10:31:25.161571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.147 [2024-11-25 10:31:25.161617] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:18.147 [2024-11-25 10:31:25.161634] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:21.434 [2024-11-25 10:31:28.402733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.402796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:21.434 [2024-11-25 10:31:28.402813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3246.374 ms 00:26:21.434 [2024-11-25 10:31:28.402827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.440726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.440780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.434 [2024-11-25 10:31:28.440796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.584 ms 00:26:21.434 [2024-11-25 10:31:28.440810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.440952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.440968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:21.434 [2024-11-25 10:31:28.440979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:21.434 [2024-11-25 10:31:28.440998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.486222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.486269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.434 [2024-11-25 10:31:28.486284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.255 ms 00:26:21.434 [2024-11-25 10:31:28.486297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.486343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.486357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.434 [2024-11-25 10:31:28.486368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:21.434 [2024-11-25 10:31:28.486391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.486900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.486926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.434 [2024-11-25 10:31:28.486937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:26:21.434 [2024-11-25 10:31:28.486950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 
[2024-11-25 10:31:28.487050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.487067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.434 [2024-11-25 10:31:28.487081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:21.434 [2024-11-25 10:31:28.487096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.507630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.507678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.434 [2024-11-25 10:31:28.507692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.547 ms 00:26:21.434 [2024-11-25 10:31:28.507705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.434 [2024-11-25 10:31:28.520235] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:21.434 [2024-11-25 10:31:28.523451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.434 [2024-11-25 10:31:28.523485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:21.434 [2024-11-25 10:31:28.523514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.674 ms 00:26:21.434 [2024-11-25 10:31:28.523525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.611684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.611747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:21.693 [2024-11-25 10:31:28.611765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.264 ms 00:26:21.693 [2024-11-25 10:31:28.611777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.611962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.611979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:21.693 [2024-11-25 10:31:28.611996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:26:21.693 [2024-11-25 10:31:28.612007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.648365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.648408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:21.693 [2024-11-25 10:31:28.648425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.362 ms 00:26:21.693 [2024-11-25 10:31:28.648436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.683903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.683941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:21.693 [2024-11-25 10:31:28.683958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.475 ms 00:26:21.693 [2024-11-25 10:31:28.683968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.684665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.684693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:21.693 
[2024-11-25 10:31:28.684709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:26:21.693 [2024-11-25 10:31:28.684722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.693 [2024-11-25 10:31:28.781468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.693 [2024-11-25 10:31:28.781526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:21.693 [2024-11-25 10:31:28.781548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.844 ms 00:26:21.693 [2024-11-25 10:31:28.781559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.818686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.953 [2024-11-25 10:31:28.818731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:21.953 [2024-11-25 10:31:28.818749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.097 ms 00:26:21.953 [2024-11-25 10:31:28.818760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.854476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.953 [2024-11-25 10:31:28.854527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:21.953 [2024-11-25 10:31:28.854544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.726 ms 00:26:21.953 [2024-11-25 10:31:28.854554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.890747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.953 [2024-11-25 10:31:28.890788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:21.953 [2024-11-25 10:31:28.890805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.204 ms 00:26:21.953 [2024-11-25 10:31:28.890816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.890863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.953 [2024-11-25 10:31:28.890875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:21.953 [2024-11-25 10:31:28.890892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:21.953 [2024-11-25 10:31:28.890903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.891018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.953 [2024-11-25 10:31:28.891033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:21.953 [2024-11-25 10:31:28.891047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:21.953 [2024-11-25 10:31:28.891057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.953 [2024-11-25 10:31:28.892306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3754.138 ms, result 0 00:26:21.953 { 00:26:21.953 "name": "ftl0", 00:26:21.953 "uuid": "f430ca0c-e16a-40b5-83da-f68ac69b1b9c" 00:26:21.953 } 00:26:21.953 10:31:28 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:21.953 10:31:28 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:22.213 10:31:29 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:22.213 10:31:29 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:22.213 [2024-11-25 10:31:29.318756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.214 [2024-11-25 10:31:29.318821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:22.214 [2024-11-25 10:31:29.318839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:22.214 [2024-11-25 10:31:29.318852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.214 [2024-11-25 10:31:29.318878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:22.214 [2024-11-25 10:31:29.323033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.214 [2024-11-25 10:31:29.323068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:22.214 [2024-11-25 10:31:29.323083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:26:22.214 [2024-11-25 10:31:29.323093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.214 [2024-11-25 10:31:29.323358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.214 [2024-11-25 10:31:29.323376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:22.214 [2024-11-25 10:31:29.323389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:26:22.214 [2024-11-25 10:31:29.323399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.325921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.325949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:22.473 [2024-11-25 10:31:29.325963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:26:22.473 [2024-11-25 10:31:29.325973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.330980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.331016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:22.473 [2024-11-25 10:31:29.331034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.991 ms 00:26:22.473 [2024-11-25 10:31:29.331044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.367608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.367649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:22.473 [2024-11-25 10:31:29.367666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.568 ms 00:26:22.473 [2024-11-25 10:31:29.367677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.388565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.388605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:22.473 [2024-11-25 10:31:29.388622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.873 ms 00:26:22.473 [2024-11-25 10:31:29.388633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.388783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.388797] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:22.473 [2024-11-25 10:31:29.388811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:26:22.473 [2024-11-25 10:31:29.388821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.424605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.424646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:22.473 [2024-11-25 10:31:29.424661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.818 ms 00:26:22.473 [2024-11-25 10:31:29.424672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.459966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.460006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:22.473 [2024-11-25 10:31:29.460022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.309 ms 00:26:22.473 [2024-11-25 10:31:29.460032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.495147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.495189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:22.473 [2024-11-25 10:31:29.495206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.124 ms 00:26:22.473 [2024-11-25 10:31:29.495216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.530940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.473 [2024-11-25 10:31:29.530993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:22.473 [2024-11-25 10:31:29.531010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.663 ms 00:26:22.473 [2024-11-25 10:31:29.531021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.473 [2024-11-25 10:31:29.531073] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:22.473 [2024-11-25 10:31:29.531090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531211] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:22.473 [2024-11-25 10:31:29.531438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 
[2024-11-25 10:31:29.531523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:22.474 [2024-11-25 10:31:29.531835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.531993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:22.474 [2024-11-25 10:31:29.532336] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:22.474 [2024-11-25 10:31:29.532348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:26:22.474 [2024-11-25 10:31:29.532359] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:22.474 [2024-11-25 10:31:29.532374] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:22.474 [2024-11-25 10:31:29.532386] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:22.474 [2024-11-25 10:31:29.532399] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:22.474 [2024-11-25 10:31:29.532409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:22.474 [2024-11-25 10:31:29.532421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:22.474 [2024-11-25 10:31:29.532432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:22.474 [2024-11-25 10:31:29.532443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:22.474 [2024-11-25 10:31:29.532452] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:22.474 [2024-11-25 10:31:29.532464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.474 [2024-11-25 10:31:29.532475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:22.474 [2024-11-25 10:31:29.532488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:26:22.474 [2024-11-25 10:31:29.532508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.475 [2024-11-25 10:31:29.552738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.475 [2024-11-25 10:31:29.552780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:22.475 [2024-11-25 10:31:29.552796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.198 ms 00:26:22.475 [2024-11-25 10:31:29.552807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.475 [2024-11-25 10:31:29.553373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:22.475 [2024-11-25 10:31:29.553395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:22.475 [2024-11-25 10:31:29.553553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:26:22.475 [2024-11-25 10:31:29.553563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.734 [2024-11-25 10:31:29.621322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.734 [2024-11-25 10:31:29.621372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:22.734 [2024-11-25 10:31:29.621389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.734 [2024-11-25 10:31:29.621400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.734 [2024-11-25 10:31:29.621474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.734 [2024-11-25 10:31:29.621485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:22.734 [2024-11-25 10:31:29.621518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.734 [2024-11-25 10:31:29.621528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.734 [2024-11-25 10:31:29.621641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.734 [2024-11-25 10:31:29.621656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:22.734 [2024-11-25 10:31:29.621669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.734 [2024-11-25 10:31:29.621679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.734 [2024-11-25 10:31:29.621704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.734 [2024-11-25 10:31:29.621714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:22.734 [2024-11-25 10:31:29.621727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.734 [2024-11-25 10:31:29.621739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.734 [2024-11-25 10:31:29.749366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.734 [2024-11-25 10:31:29.749434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:22.734 [2024-11-25 10:31:29.749452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:22.734 [2024-11-25 10:31:29.749463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.993 [2024-11-25 10:31:29.853352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.993 [2024-11-25 10:31:29.853417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:22.993 [2024-11-25 10:31:29.853435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.993 [2024-11-25 10:31:29.853449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.993 [2024-11-25 10:31:29.853585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.993 [2024-11-25 10:31:29.853599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:22.993 [2024-11-25 10:31:29.853613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.993 [2024-11-25 10:31:29.853624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.993 [2024-11-25 10:31:29.853687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.993 [2024-11-25 10:31:29.853699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:22.993 [2024-11-25 10:31:29.853712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.993 [2024-11-25 10:31:29.853722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.993 [2024-11-25 10:31:29.853847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.993 [2024-11-25 10:31:29.853861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:22.993 [2024-11-25 10:31:29.853874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.993 [2024-11-25 10:31:29.853884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.993 [2024-11-25 10:31:29.853929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.993 [2024-11-25 10:31:29.853942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:22.993 [2024-11-25 10:31:29.853954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.993 [2024-11-25 10:31:29.853964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.994 [2024-11-25 10:31:29.854008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.994 [2024-11-25 10:31:29.854020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:22.994 [2024-11-25 10:31:29.854033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.994 [2024-11-25 10:31:29.854043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.994 [2024-11-25 10:31:29.854090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:22.994 [2024-11-25 10:31:29.854102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:22.994 [2024-11-25 10:31:29.854115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:22.994 [2024-11-25 10:31:29.854124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:22.994 [2024-11-25 10:31:29.854257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.339 ms, result 0 00:26:22.994 true 00:26:22.994 10:31:29 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78949 
00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78949 ']' 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78949 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78949 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.994 killing process with pid 78949 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78949' 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78949 00:26:22.994 10:31:29 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78949 00:26:28.265 10:31:35 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:32.687 262144+0 records in 00:26:32.687 262144+0 records out 00:26:32.687 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.25865 s, 252 MB/s 00:26:32.687 10:31:39 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:34.066 10:31:41 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:34.066 [2024-11-25 10:31:41.147998] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:26:34.066 [2024-11-25 10:31:41.148145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79189 ] 00:26:34.325 [2024-11-25 10:31:41.329357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.584 [2024-11-25 10:31:41.446763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.844 [2024-11-25 10:31:41.826663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:34.844 [2024-11-25 10:31:41.826738] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:35.104 [2024-11-25 10:31:41.994533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.104 [2024-11-25 10:31:41.994591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:35.104 [2024-11-25 10:31:41.994607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:35.104 [2024-11-25 10:31:41.994617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.104 [2024-11-25 10:31:41.994665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.104 [2024-11-25 10:31:41.994680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.104 [2024-11-25 10:31:41.994690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:35.104 [2024-11-25 10:31:41.994701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.104 [2024-11-25 10:31:41.994721] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:35.105 [2024-11-25 10:31:41.995676] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:35.105 [2024-11-25 10:31:41.995705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:41.995716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.105 [2024-11-25 10:31:41.995727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:26:35.105 [2024-11-25 10:31:41.995737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:41.997137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:35.105 [2024-11-25 10:31:42.016123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.016165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:35.105 [2024-11-25 10:31:42.016180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:26:35.105 [2024-11-25 10:31:42.016190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.016264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.016277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:35.105 [2024-11-25 10:31:42.016288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:35.105 [2024-11-25 10:31:42.016298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.022949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.022982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.105 [2024-11-25 10:31:42.022995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.589 ms 00:26:35.105 [2024-11-25 10:31:42.023013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.023113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.023126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.105 [2024-11-25 10:31:42.023137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:35.105 [2024-11-25 10:31:42.023147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.023187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.023199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:35.105 [2024-11-25 10:31:42.023209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:35.105 [2024-11-25 10:31:42.023219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.023250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:35.105 [2024-11-25 10:31:42.028177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.028211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:35.105 [2024-11-25 10:31:42.028230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.940 ms 00:26:35.105 [2024-11-25 10:31:42.028241] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.028270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.028280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:35.105 [2024-11-25 10:31:42.028291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:35.105 [2024-11-25 10:31:42.028300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.028350] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:35.105 [2024-11-25 10:31:42.028377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:35.105 [2024-11-25 10:31:42.028411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:35.105 [2024-11-25 10:31:42.028434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:35.105 [2024-11-25 10:31:42.028533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:35.105 [2024-11-25 10:31:42.028547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:35.105 [2024-11-25 10:31:42.028560] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:35.105 [2024-11-25 10:31:42.028573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:35.105 [2024-11-25 10:31:42.028585] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:35.105 [2024-11-25 10:31:42.028596] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:35.105 [2024-11-25 10:31:42.028605] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:35.105 [2024-11-25 10:31:42.028615] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:35.105 [2024-11-25 10:31:42.028631] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:35.105 [2024-11-25 10:31:42.028641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.028651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:35.105 [2024-11-25 10:31:42.028661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:26:35.105 [2024-11-25 10:31:42.028671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.028742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.105 [2024-11-25 10:31:42.028753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:35.105 [2024-11-25 10:31:42.028763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:35.105 [2024-11-25 10:31:42.028772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.105 [2024-11-25 10:31:42.028871] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:35.105 [2024-11-25 10:31:42.028887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:35.105 [2024-11-25 10:31:42.028898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:35.105 [2024-11-25 10:31:42.028908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.105 [2024-11-25 10:31:42.028918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:35.105 [2024-11-25 10:31:42.028927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:35.105 [2024-11-25 10:31:42.028937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:35.105 [2024-11-25 10:31:42.028946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:35.105 [2024-11-25 10:31:42.028955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:35.105 [2024-11-25 10:31:42.028965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:35.105 [2024-11-25 10:31:42.028974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:35.105 [2024-11-25 10:31:42.028983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:35.105 [2024-11-25 10:31:42.028993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:35.105 [2024-11-25 10:31:42.029015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:35.105 [2024-11-25 10:31:42.029025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:35.105 [2024-11-25 10:31:42.029034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.105 [2024-11-25 10:31:42.029043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:35.105 [2024-11-25 10:31:42.029052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:35.105 [2024-11-25 10:31:42.029061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.105 [2024-11-25 10:31:42.029071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:35.105 [2024-11-25 10:31:42.029080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:35.105 [2024-11-25 10:31:42.029089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.105 [2024-11-25 10:31:42.029098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:35.105 [2024-11-25 10:31:42.029107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:35.105 [2024-11-25 10:31:42.029116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.105 [2024-11-25 10:31:42.029125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:35.106 [2024-11-25 10:31:42.029134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.106 [2024-11-25 10:31:42.029152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:35.106 [2024-11-25 10:31:42.029161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:35.106 [2024-11-25 10:31:42.029179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:35.106 [2024-11-25 10:31:42.029188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:35.106 [2024-11-25 10:31:42.029206] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:35.106 [2024-11-25 10:31:42.029215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:35.106 [2024-11-25 10:31:42.029223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:35.106 [2024-11-25 10:31:42.029232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:35.106 [2024-11-25 10:31:42.029241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:35.106 [2024-11-25 10:31:42.029250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:35.106 [2024-11-25 10:31:42.029269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:35.106 [2024-11-25 10:31:42.029278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029287] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:35.106 [2024-11-25 10:31:42.029297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:35.106 [2024-11-25 10:31:42.029315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:35.106 [2024-11-25 10:31:42.029324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:35.106 [2024-11-25 10:31:42.029334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:35.106 [2024-11-25 10:31:42.029344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:35.106 [2024-11-25 10:31:42.029353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:35.106 [2024-11-25 10:31:42.029362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:35.106 [2024-11-25 10:31:42.029371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:35.106 [2024-11-25 10:31:42.029380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:35.106 [2024-11-25 10:31:42.029391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:35.106 [2024-11-25 10:31:42.029403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:35.106 [2024-11-25 10:31:42.029431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:35.106 [2024-11-25 10:31:42.029442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:35.106 [2024-11-25 10:31:42.029452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:35.106 [2024-11-25 10:31:42.029462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:35.106 [2024-11-25 10:31:42.029472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:35.106 [2024-11-25 10:31:42.029483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:35.106 [2024-11-25 10:31:42.029504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:35.106 [2024-11-25 10:31:42.029515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:35.106 [2024-11-25 10:31:42.029525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:35.106 [2024-11-25 10:31:42.029578] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:35.106 [2024-11-25 10:31:42.029589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:35.106 [2024-11-25 10:31:42.029610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:35.106 [2024-11-25 10:31:42.029621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:35.106 [2024-11-25 10:31:42.029631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:35.106 [2024-11-25 10:31:42.029642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.029652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:35.106 [2024-11-25 10:31:42.029662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:26:35.106 [2024-11-25 10:31:42.029672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.066413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.066455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.106 [2024-11-25 10:31:42.066469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.754 ms 00:26:35.106 [2024-11-25 10:31:42.066487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.066571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.066583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:35.106 [2024-11-25 10:31:42.066594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.050 ms 00:26:35.106 [2024-11-25 10:31:42.066604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.128226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.128265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:35.106 [2024-11-25 10:31:42.128278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.660 ms 00:26:35.106 [2024-11-25 10:31:42.128288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.128323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.128335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:35.106 [2024-11-25 10:31:42.128353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:35.106 [2024-11-25 10:31:42.128364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.128850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.128872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:35.106 [2024-11-25 10:31:42.128883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:26:35.106 [2024-11-25 10:31:42.128893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.106 [2024-11-25 10:31:42.129013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.106 [2024-11-25 10:31:42.129027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:35.106 [2024-11-25 10:31:42.129046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:26:35.106 [2024-11-25 10:31:42.129057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.107 [2024-11-25 10:31:42.149326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.107 [2024-11-25 10:31:42.149364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:35.107 [2024-11-25 10:31:42.149381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.282 ms 00:26:35.107 [2024-11-25 10:31:42.149392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.107 [2024-11-25 10:31:42.168760] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:35.107 [2024-11-25 10:31:42.168802] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:35.107 [2024-11-25 10:31:42.168816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.107 [2024-11-25 10:31:42.168827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:35.107 [2024-11-25 10:31:42.168839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.340 ms 00:26:35.107 [2024-11-25 10:31:42.168849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.107 [2024-11-25 10:31:42.198678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.107 [2024-11-25 10:31:42.198732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:35.107 [2024-11-25 10:31:42.198747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.837 ms 00:26:35.107 [2024-11-25 10:31:42.198758] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.366 [2024-11-25 10:31:42.217111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.366 [2024-11-25 10:31:42.217151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:35.366 [2024-11-25 10:31:42.217164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.331 ms 00:26:35.366 [2024-11-25 10:31:42.217174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.366 [2024-11-25 10:31:42.235484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.366 [2024-11-25 10:31:42.235529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:35.366 [2024-11-25 10:31:42.235541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.301 ms 00:26:35.366 [2024-11-25 10:31:42.235552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.366 [2024-11-25 10:31:42.236369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.366 [2024-11-25 10:31:42.236401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:35.366 [2024-11-25 10:31:42.236413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:26:35.367 [2024-11-25 10:31:42.236430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.367 [2024-11-25 10:31:42.323507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.367 [2024-11-25 10:31:42.323568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:35.367 [2024-11-25 10:31:42.323584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.194 ms 00:26:35.367 [2024-11-25 10:31:42.323601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.367 [2024-11-25 10:31:42.334346] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:35.367 [2024-11-25 10:31:42.337209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.367 [2024-11-25 10:31:42.337239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:35.367 [2024-11-25 10:31:42.337253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:26:35.367 [2024-11-25 10:31:42.337264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.367 [2024-11-25 10:31:42.337361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.367 [2024-11-25 10:31:42.337375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:35.367 [2024-11-25 10:31:42.337387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:35.367 [2024-11-25 10:31:42.337397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.367 [2024-11-25 10:31:42.337487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.367 [2024-11-25 10:31:42.337509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:35.367 [2024-11-25 10:31:42.337520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:35.367 [2024-11-25 10:31:42.337530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.367 [2024-11-25 10:31:42.337556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:35.367 [2024-11-25 10:31:42.337567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:26:35.367 [2024-11-25 10:31:42.337577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:35.367 [2024-11-25 10:31:42.337587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.367 [2024-11-25 10:31:42.337616] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:35.367 [2024-11-25 10:31:42.337631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.367 [2024-11-25 10:31:42.337641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:26:35.367 [2024-11-25 10:31:42.337650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:26:35.367 [2024-11-25 10:31:42.337660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.367 [2024-11-25 10:31:42.373858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.367 [2024-11-25 10:31:42.373901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:26:35.367 [2024-11-25 10:31:42.373915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.234 ms
00:26:35.367 [2024-11-25 10:31:42.373926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.367 [2024-11-25 10:31:42.374008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:35.367 [2024-11-25 10:31:42.374020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:26:35.367 [2024-11-25 10:31:42.374032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:26:35.367 [2024-11-25 10:31:42.374041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:35.367 [2024-11-25 10:31:42.375099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.774 ms, result 0
00:26:36.304  [2024-11-25T10:32:19.595Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-25 10:32:19.505445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.505511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:12.483 [2024-11-25 10:32:19.505529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:27:12.483 [2024-11-25 10:32:19.505539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.483 [2024-11-25 10:32:19.505573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:12.483 [2024-11-25 10:32:19.509759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.509799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:12.483 [2024-11-25 10:32:19.509813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms
00:27:12.483 [2024-11-25 10:32:19.509830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.483 [2024-11-25 10:32:19.511737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.511776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:12.483 [2024-11-25 10:32:19.511790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms
00:27:12.483 [2024-11-25 10:32:19.511799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.483 [2024-11-25 10:32:19.529413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.529451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:12.483 [2024-11-25 10:32:19.529463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms
00:27:12.483 [2024-11-25 10:32:19.529475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.483 [2024-11-25 10:32:19.534550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.534690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:27:12.483 [2024-11-25 10:32:19.534709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms
00:27:12.483 [2024-11-25 10:32:19.534719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.483 [2024-11-25 10:32:19.571687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:12.483 [2024-11-25 10:32:19.571727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:12.483 [2024-11-25 10:32:19.571740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.966 ms
00:27:12.483 [2024-11-25 10:32:19.571750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:12.742 [2024-11-25 10:32:19.592750]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.742 [2024-11-25 10:32:19.592804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:12.742 [2024-11-25 10:32:19.592818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.996 ms 00:27:12.742 [2024-11-25 10:32:19.592828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.742 [2024-11-25 10:32:19.592990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.742 [2024-11-25 10:32:19.593007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:12.742 [2024-11-25 10:32:19.593018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:27:12.742 [2024-11-25 10:32:19.593028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.742 [2024-11-25 10:32:19.629906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.742 [2024-11-25 10:32:19.629948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:12.742 [2024-11-25 10:32:19.629962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.921 ms 00:27:12.742 [2024-11-25 10:32:19.629972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.742 [2024-11-25 10:32:19.666048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.742 [2024-11-25 10:32:19.666089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:12.742 [2024-11-25 10:32:19.666102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.094 ms 00:27:12.743 [2024-11-25 10:32:19.666112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-25 10:32:19.702571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-25 10:32:19.702616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:12.743 [2024-11-25 10:32:19.702631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.478 ms 00:27:12.743 [2024-11-25 10:32:19.702640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-25 10:32:19.738804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.743 [2024-11-25 10:32:19.738846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:12.743 [2024-11-25 10:32:19.738861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.118 ms 00:27:12.743 [2024-11-25 10:32:19.738870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.743 [2024-11-25 10:32:19.738909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:12.743 [2024-11-25 10:32:19.738925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738988] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.738999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 
10:32:19.739257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:27:12.743 [2024-11-25 10:32:19.739548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:12.743 [2024-11-25 10:32:19.739581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.739998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.740008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.740019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:12.744 [2024-11-25 10:32:19.740036] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:12.744 [2024-11-25 10:32:19.740046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:27:12.744 [2024-11-25 10:32:19.740062] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:12.744 [2024-11-25 10:32:19.740072] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:12.744 [2024-11-25 10:32:19.740081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:12.744 [2024-11-25 10:32:19.740091] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:12.744 [2024-11-25 10:32:19.740101] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:12.744 [2024-11-25 10:32:19.740122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:12.744 [2024-11-25 10:32:19.740133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:12.744 [2024-11-25 10:32:19.740142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:12.744 [2024-11-25 10:32:19.740150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:12.744 [2024-11-25 10:32:19.740160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.744 [2024-11-25 10:32:19.740169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:12.744 [2024-11-25 10:32:19.740180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:27:12.744 [2024-11-25 10:32:19.740189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.760207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.744 [2024-11-25 10:32:19.760246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:12.744 [2024-11-25 10:32:19.760260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.014 ms 00:27:12.744 [2024-11-25 10:32:19.760270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.760782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.744 [2024-11-25 10:32:19.760805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:12.744 [2024-11-25 10:32:19.760818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:27:12.744 [2024-11-25 10:32:19.760834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.813259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.744 [2024-11-25 10:32:19.813301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.744 [2024-11-25 10:32:19.813315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.744 [2024-11-25 10:32:19.813325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.813396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.744 [2024-11-25 10:32:19.813407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.744 [2024-11-25 10:32:19.813418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.744 [2024-11-25 10:32:19.813434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.813541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.744 [2024-11-25 10:32:19.813556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.744 [2024-11-25 10:32:19.813568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.744 [2024-11-25 10:32:19.813578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.744 [2024-11-25 10:32:19.813595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:12.744 [2024-11-25 10:32:19.813606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:27:12.744 [2024-11-25 10:32:19.813616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:12.745 [2024-11-25 10:32:19.813626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:19.939489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:19.939562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:13.004 [2024-11-25 10:32:19.939579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:19.939589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.040528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.040593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:13.004 [2024-11-25 10:32:20.040608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.040619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.040724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.040737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:13.004 [2024-11-25 10:32:20.040748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.040758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.040803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.040815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:13.004 [2024-11-25 10:32:20.040825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.040835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.040958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.040976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:13.004 [2024-11-25 10:32:20.040987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.040997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.041032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.041044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:13.004 [2024-11-25 10:32:20.041054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.041064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.041100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.041115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:13.004 [2024-11-25 10:32:20.041125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.041135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.041174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:13.004 [2024-11-25 10:32:20.041185] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:13.004 [2024-11-25 10:32:20.041196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:13.004 [2024-11-25 10:32:20.041205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:13.004 [2024-11-25 10:32:20.041318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.710 ms, result 0 00:27:14.374 00:27:14.374 00:27:14.374 10:32:21 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:14.374 [2024-11-25 10:32:21.243696] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:27:14.374 [2024-11-25 10:32:21.243818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79592 ] 00:27:14.374 [2024-11-25 10:32:21.423909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.696 [2024-11-25 10:32:21.537257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.954 [2024-11-25 10:32:21.895469] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.955 [2024-11-25 10:32:21.895548] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.955 [2024-11-25 10:32:22.056285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.955 [2024-11-25 10:32:22.056346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.955 [2024-11-25 10:32:22.056362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:14.955 [2024-11-25 10:32:22.056373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.955 [2024-11-25 10:32:22.056420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.955 [2024-11-25 10:32:22.056435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.955 [2024-11-25 10:32:22.056446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:14.955 [2024-11-25 10:32:22.056456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.955 [2024-11-25 10:32:22.056476] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.955 [2024-11-25 10:32:22.057598] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.955 [2024-11-25 10:32:22.057624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.955 [2024-11-25 10:32:22.057636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.955 [2024-11-25 10:32:22.057647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:27:14.955 [2024-11-25 10:32:22.057657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.955 [2024-11-25 10:32:22.059075] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:15.213 [2024-11-25 10:32:22.078871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.213 [2024-11-25 
10:32:22.078911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:15.213 [2024-11-25 10:32:22.078925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.828 ms 00:27:15.213 [2024-11-25 10:32:22.078936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.213 [2024-11-25 10:32:22.079001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.213 [2024-11-25 10:32:22.079014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:15.213 [2024-11-25 10:32:22.079025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:15.213 [2024-11-25 10:32:22.079036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.213 [2024-11-25 10:32:22.085831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.213 [2024-11-25 10:32:22.085978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.213 [2024-11-25 10:32:22.085999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.733 ms 00:27:15.213 [2024-11-25 10:32:22.086015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.213 [2024-11-25 10:32:22.086098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.213 [2024-11-25 10:32:22.086111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.213 [2024-11-25 10:32:22.086121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:15.213 [2024-11-25 10:32:22.086132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.213 [2024-11-25 10:32:22.086174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.213 [2024-11-25 10:32:22.086186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:15.213 [2024-11-25 10:32:22.086196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:15.213 [2024-11-25 10:32:22.086207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.213 [2024-11-25 10:32:22.086235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:15.213 [2024-11-25 10:32:22.090997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.214 [2024-11-25 10:32:22.091033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.214 [2024-11-25 10:32:22.091049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:27:15.214 [2024-11-25 10:32:22.091059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.214 [2024-11-25 10:32:22.091090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.214 [2024-11-25 10:32:22.091101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:15.214 [2024-11-25 10:32:22.091112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:15.214 [2024-11-25 10:32:22.091122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.214 [2024-11-25 10:32:22.091177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:15.214 [2024-11-25 10:32:22.091201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:15.214 [2024-11-25 10:32:22.091234] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:15.214 [2024-11-25 10:32:22.091256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:15.214 [2024-11-25 10:32:22.091343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:15.214 [2024-11-25 10:32:22.091356] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:15.214 [2024-11-25 10:32:22.091370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:15.214 [2024-11-25 10:32:22.091383] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091395] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091406] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:15.214 [2024-11-25 10:32:22.091417] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:15.214 [2024-11-25 10:32:22.091427] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:15.214 [2024-11-25 10:32:22.091440] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:15.214 [2024-11-25 10:32:22.091450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.214 [2024-11-25 10:32:22.091460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:15.214 [2024-11-25 10:32:22.091471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:27:15.214 [2024-11-25 10:32:22.091481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.214 [2024-11-25 10:32:22.091572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.214 [2024-11-25 10:32:22.091585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:15.214 [2024-11-25 10:32:22.091595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:15.214 [2024-11-25 10:32:22.091605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.214 [2024-11-25 10:32:22.091701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:15.214 [2024-11-25 10:32:22.091717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:15.214 [2024-11-25 10:32:22.091727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:15.214 [2024-11-25 10:32:22.091758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:15.214 [2024-11-25 10:32:22.091788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.214 [2024-11-25 10:32:22.091807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:27:15.214 [2024-11-25 10:32:22.091817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:15.214 [2024-11-25 10:32:22.091827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.214 [2024-11-25 10:32:22.091846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:15.214 [2024-11-25 10:32:22.091856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:15.214 [2024-11-25 10:32:22.091865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:15.214 [2024-11-25 10:32:22.091884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:15.214 [2024-11-25 10:32:22.091911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:15.214 [2024-11-25 10:32:22.091939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:15.214 [2024-11-25 10:32:22.091966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:15.214 [2024-11-25 10:32:22.091975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.214 [2024-11-25 10:32:22.091984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:15.214 [2024-11-25 10:32:22.091993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:15.214 [2024-11-25 10:32:22.092002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.214 [2024-11-25 10:32:22.092011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:15.214 [2024-11-25 10:32:22.092020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:15.214 [2024-11-25 10:32:22.092029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.214 [2024-11-25 10:32:22.092038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:15.214 [2024-11-25 10:32:22.092047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:15.214 [2024-11-25 10:32:22.092055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.214 [2024-11-25 10:32:22.092065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:15.214 [2024-11-25 10:32:22.092074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:15.214 [2024-11-25 10:32:22.092083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.092092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:15.214 [2024-11-25 10:32:22.092101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:15.214 [2024-11-25 10:32:22.092111] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.092120] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:15.214 [2024-11-25 10:32:22.092130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:15.214 [2024-11-25 10:32:22.092140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.214 [2024-11-25 10:32:22.092149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.214 [2024-11-25 10:32:22.092159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:15.214 [2024-11-25 10:32:22.092168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:15.214 [2024-11-25 10:32:22.092177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:15.214 [2024-11-25 10:32:22.092187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:15.214 [2024-11-25 10:32:22.092196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:15.214 [2024-11-25 10:32:22.092205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:15.215 [2024-11-25 10:32:22.092216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:15.215 [2024-11-25 10:32:22.092228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:15.215 [2024-11-25 10:32:22.092254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:15.215 [2024-11-25 10:32:22.092264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:15.215 [2024-11-25 10:32:22.092274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:15.215 [2024-11-25 10:32:22.092285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:15.215 [2024-11-25 10:32:22.092295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:15.215 [2024-11-25 10:32:22.092305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:15.215 [2024-11-25 10:32:22.092315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:15.215 [2024-11-25 10:32:22.092325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:15.215 [2024-11-25 10:32:22.092335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:15.215 [2024-11-25 10:32:22.092386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:15.215 [2024-11-25 10:32:22.092397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.215 [2024-11-25 10:32:22.092419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:15.215 [2024-11-25 10:32:22.092430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:15.215 [2024-11-25 10:32:22.092441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:15.215 [2024-11-25 10:32:22.092451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.092462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:15.215 [2024-11-25 10:32:22.092472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:27:15.215 [2024-11-25 10:32:22.092482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.131968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.132129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:15.215 [2024-11-25 10:32:22.132214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.494 ms 00:27:15.215 [2024-11-25 10:32:22.132259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.132361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.132446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:15.215 [2024-11-25 10:32:22.132482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:15.215 [2024-11-25 10:32:22.132534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.193142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.193315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:15.215 [2024-11-25 10:32:22.193480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.554 ms 00:27:15.215 [2024-11-25 10:32:22.193537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.193595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.193629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.215 [2024-11-25 10:32:22.193725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:27:15.215 [2024-11-25 10:32:22.193760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.194272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.194388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.215 [2024-11-25 10:32:22.194466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:27:15.215 [2024-11-25 10:32:22.194521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.194738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.194776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.215 [2024-11-25 10:32:22.194814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:15.215 [2024-11-25 10:32:22.194842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.213195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.213339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.215 [2024-11-25 10:32:22.213422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.284 ms 00:27:15.215 [2024-11-25 10:32:22.213457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.232504] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:15.215 [2024-11-25 10:32:22.232677] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:15.215 [2024-11-25 10:32:22.232775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.232807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:15.215 [2024-11-25 10:32:22.232838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.215 ms 00:27:15.215 [2024-11-25 10:32:22.232867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.262057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.262196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:15.215 [2024-11-25 10:32:22.262279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.181 ms 00:27:15.215 [2024-11-25 10:32:22.262314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.280265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.280424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:15.215 [2024-11-25 10:32:22.280580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.851 ms 00:27:15.215 [2024-11-25 10:32:22.280620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.298185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.298324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:15.215 [2024-11-25 10:32:22.298405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:27:15.215 [2024-11-25 
10:32:22.298441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.215 [2024-11-25 10:32:22.299301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.215 [2024-11-25 10:32:22.299335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:15.215 [2024-11-25 10:32:22.299351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:27:15.215 [2024-11-25 10:32:22.299361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.384068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.384127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:15.473 [2024-11-25 10:32:22.384149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.822 ms 00:27:15.473 [2024-11-25 10:32:22.384160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.395077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:15.473 [2024-11-25 10:32:22.398067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.398198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:15.473 [2024-11-25 10:32:22.398271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.880 ms 00:27:15.473 [2024-11-25 10:32:22.398307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.398419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.398457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:15.473 [2024-11-25 10:32:22.398488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:15.473 [2024-11-25 10:32:22.398535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.398723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.398742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:15.473 [2024-11-25 10:32:22.398755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:15.473 [2024-11-25 10:32:22.398765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.398790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.398801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:15.473 [2024-11-25 10:32:22.398811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:15.473 [2024-11-25 10:32:22.398821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.398855] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:15.473 [2024-11-25 10:32:22.398868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.398878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:15.473 [2024-11-25 10:32:22.398887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:15.473 [2024-11-25 10:32:22.398897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.436652] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.436797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:15.473 [2024-11-25 10:32:22.436875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.793 ms 00:27:15.473 [2024-11-25 10:32:22.436919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.437071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.473 [2024-11-25 10:32:22.437114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:15.473 [2024-11-25 10:32:22.437208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:15.473 [2024-11-25 10:32:22.437245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.473 [2024-11-25 10:32:22.438390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.274 ms, result 0 00:27:16.847  [2024-11-25T10:32:24.894Z] Copying: 27/1024 [MB] (27 MBps) [… intermediate spdk_dd progress updates (26–30 MBps) elided …] [2024-11-25T10:32:59.266Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-25 10:32:58.976446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.154 [2024-11-25 10:32:58.976516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:52.154 [2024-11-25 10:32:58.976533]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:52.154 [2024-11-25 10:32:58.976543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.154 [2024-11-25 10:32:58.976565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:52.154 [2024-11-25 10:32:58.980949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.154 [2024-11-25 10:32:58.980988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:52.154 [2024-11-25 10:32:58.981008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:27:52.154 [2024-11-25 10:32:58.981029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.154 [2024-11-25 10:32:58.981229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.154 [2024-11-25 10:32:58.981242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:52.154 [2024-11-25 10:32:58.981253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:27:52.154 [2024-11-25 10:32:58.981263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.154 [2024-11-25 10:32:58.984418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.154 [2024-11-25 10:32:58.984443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:52.154 [2024-11-25 10:32:58.984455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:27:52.154 [2024-11-25 10:32:58.984470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.154 [2024-11-25 10:32:58.989789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.154 [2024-11-25 10:32:58.989828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:52.154 [2024-11-25 10:32:58.989840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms 00:27:52.154 [2024-11-25 10:32:58.989851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.154 [2024-11-25 10:32:59.026055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.026095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:52.155 [2024-11-25 10:32:59.026109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.218 ms 00:27:52.155 [2024-11-25 10:32:59.026119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.047039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.047216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:52.155 [2024-11-25 10:32:59.047238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.916 ms 00:27:52.155 [2024-11-25 10:32:59.047249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.047373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.047387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:52.155 [2024-11-25 10:32:59.047397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:52.155 [2024-11-25 10:32:59.047407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.083893] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.083929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:52.155 [2024-11-25 10:32:59.083942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.529 ms 00:27:52.155 [2024-11-25 10:32:59.083951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.119645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.119805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:52.155 [2024-11-25 10:32:59.119825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.698 ms 00:27:52.155 [2024-11-25 10:32:59.119835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.155453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.155623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:52.155 [2024-11-25 10:32:59.155645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.642 ms 00:27:52.155 [2024-11-25 10:32:59.155655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.191388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.155 [2024-11-25 10:32:59.191551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:52.155 [2024-11-25 10:32:59.191571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.720 ms 00:27:52.155 [2024-11-25 10:32:59.191583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.155 [2024-11-25 10:32:59.191625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:52.155 [2024-11-25 10:32:59.191648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 
wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.191992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:52.155 [2024-11-25 10:32:59.192210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192294] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192564] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:52.156 [2024-11-25 10:32:59.192717] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:52.156 [2024-11-25 10:32:59.192730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:27:52.156 [2024-11-25 10:32:59.192741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:52.156 [2024-11-25 10:32:59.192752] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:52.156 [2024-11-25 10:32:59.192762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:52.156 [2024-11-25 10:32:59.192772] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:52.156 [2024-11-25 10:32:59.192791] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:52.156 [2024-11-25 10:32:59.192801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:52.156 [2024-11-25 10:32:59.192818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:52.156 [2024-11-25 10:32:59.192827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:52.156 [2024-11-25 10:32:59.192836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:52.156 [2024-11-25 10:32:59.192846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.156 [2024-11-25 10:32:59.192856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:52.156 [2024-11-25 10:32:59.192866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 
00:27:52.156 [2024-11-25 10:32:59.192876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.156 [2024-11-25 10:32:59.212180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.156 [2024-11-25 10:32:59.212215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:52.156 [2024-11-25 10:32:59.212228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.297 ms 00:27:52.156 [2024-11-25 10:32:59.212238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.156 [2024-11-25 10:32:59.212784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.156 [2024-11-25 10:32:59.212802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:52.156 [2024-11-25 10:32:59.212819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:27:52.156 [2024-11-25 10:32:59.212828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.264113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.264158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:52.415 [2024-11-25 10:32:59.264172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.264183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.264242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.264252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:52.415 [2024-11-25 10:32:59.264268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.264278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.264368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.264382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:52.415 [2024-11-25 10:32:59.264392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.264402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.264420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.264431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:52.415 [2024-11-25 10:32:59.264441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.264454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.387796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.387853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:52.415 [2024-11-25 10:32:59.387867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.387879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.490050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.490099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:52.415 [2024-11-25 10:32:59.490114] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.490131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.490236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.490248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:52.415 [2024-11-25 10:32:59.490259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.490269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.490314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.490326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:52.415 [2024-11-25 10:32:59.490336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.415 [2024-11-25 10:32:59.490346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.415 [2024-11-25 10:32:59.490476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.415 [2024-11-25 10:32:59.490509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:52.416 [2024-11-25 10:32:59.490521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.416 [2024-11-25 10:32:59.490544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.416 [2024-11-25 10:32:59.490582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.416 [2024-11-25 10:32:59.490594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:52.416 [2024-11-25 10:32:59.490604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.416 [2024-11-25 10:32:59.490614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.416 [2024-11-25 10:32:59.490655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.416 [2024-11-25 10:32:59.490666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:52.416 [2024-11-25 10:32:59.490676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.416 [2024-11-25 10:32:59.490686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.416 [2024-11-25 10:32:59.490726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:52.416 [2024-11-25 10:32:59.490738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:52.416 [2024-11-25 10:32:59.490747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:52.416 [2024-11-25 10:32:59.490757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.416 [2024-11-25 10:32:59.490871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.240 ms, result 0 00:27:53.794 00:27:53.794 00:27:53.794 10:33:00 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:55.171 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:55.171 10:33:02 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:55.431 
[2024-11-25 10:33:02.353510] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:27:55.431 [2024-11-25 10:33:02.353826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80011 ] 00:27:55.431 [2024-11-25 10:33:02.534748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.689 [2024-11-25 10:33:02.655395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.948 [2024-11-25 10:33:03.011481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:55.948 [2024-11-25 10:33:03.011558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:56.208 [2024-11-25 10:33:03.171658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.171718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:56.208 [2024-11-25 10:33:03.171734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:56.208 [2024-11-25 10:33:03.171744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.208 [2024-11-25 10:33:03.171793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.171808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:56.208 [2024-11-25 10:33:03.171819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:56.208 [2024-11-25 10:33:03.171829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.208 [2024-11-25 10:33:03.171850] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:56.208 [2024-11-25 10:33:03.172786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:56.208 [2024-11-25 10:33:03.172809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.172820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:56.208 [2024-11-25 10:33:03.172831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:27:56.208 [2024-11-25 10:33:03.172841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.208 [2024-11-25 10:33:03.174229] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:56.208 [2024-11-25 10:33:03.194832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.194874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:56.208 [2024-11-25 10:33:03.194888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.636 ms 00:27:56.208 [2024-11-25 10:33:03.194899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.208 [2024-11-25 10:33:03.194962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.194975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:56.208 [2024-11-25 10:33:03.194986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:56.208 [2024-11-25 10:33:03.194996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:56.208 [2024-11-25 10:33:03.201698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.208 [2024-11-25 10:33:03.201730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:56.208 [2024-11-25 10:33:03.201746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 00:27:56.208 [2024-11-25 10:33:03.201756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.201832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.201846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:56.209 [2024-11-25 10:33:03.201856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:56.209 [2024-11-25 10:33:03.201867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.201908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.201920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:56.209 [2024-11-25 10:33:03.201930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:56.209 [2024-11-25 10:33:03.201944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.201970] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:56.209 [2024-11-25 10:33:03.206522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.206557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:56.209 [2024-11-25 10:33:03.206569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms 00:27:56.209 [2024-11-25 10:33:03.206579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.206609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.206619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:56.209 [2024-11-25 10:33:03.206630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:56.209 [2024-11-25 10:33:03.206639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.206692] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:56.209 [2024-11-25 10:33:03.206715] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:56.209 [2024-11-25 10:33:03.206753] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:56.209 [2024-11-25 10:33:03.206770] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:56.209 [2024-11-25 10:33:03.206856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:56.209 [2024-11-25 10:33:03.206870] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:56.209 [2024-11-25 10:33:03.206882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:56.209 [2024-11-25 10:33:03.206895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:56.209 [2024-11-25 10:33:03.206906] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:56.209 [2024-11-25 10:33:03.206917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:56.209 [2024-11-25 10:33:03.206927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:56.209 [2024-11-25 10:33:03.206940] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:56.209 [2024-11-25 10:33:03.206950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:56.209 [2024-11-25 10:33:03.206960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.206970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:56.209 [2024-11-25 10:33:03.206980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:27:56.209 [2024-11-25 10:33:03.206990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.207064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.209 [2024-11-25 10:33:03.207074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:56.209 [2024-11-25 10:33:03.207084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:56.209 [2024-11-25 10:33:03.207093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.209 [2024-11-25 10:33:03.207189] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:56.209 [2024-11-25 10:33:03.207208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:56.209 [2024-11-25 10:33:03.207219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:56.209 [2024-11-25 10:33:03.207249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:56.209 [2024-11-25 10:33:03.207278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.209 [2024-11-25 10:33:03.207297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:56.209 [2024-11-25 10:33:03.207306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:56.209 [2024-11-25 10:33:03.207316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:56.209 [2024-11-25 10:33:03.207334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:56.209 [2024-11-25 10:33:03.207344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:56.209 [2024-11-25 10:33:03.207353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:56.209 [2024-11-25 10:33:03.207372] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:56.209 [2024-11-25 10:33:03.207400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:56.209 [2024-11-25 10:33:03.207427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:56.209 [2024-11-25 10:33:03.207455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:56.209 [2024-11-25 10:33:03.207482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:56.209 [2024-11-25 10:33:03.207532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.209 [2024-11-25 10:33:03.207550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:56.209 [2024-11-25 10:33:03.207559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:56.209 [2024-11-25 10:33:03.207568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:56.209 [2024-11-25 10:33:03.207578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:56.209 [2024-11-25 10:33:03.207587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:56.209 [2024-11-25 10:33:03.207596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:56.209 [2024-11-25 10:33:03.207615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:56.209 [2024-11-25 10:33:03.207625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207634] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:56.209 [2024-11-25 10:33:03.207645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:56.209 [2024-11-25 10:33:03.207654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:56.209 [2024-11-25 10:33:03.207674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:56.209 [2024-11-25 10:33:03.207684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:56.209 [2024-11-25 
10:33:03.207693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:56.209 [2024-11-25 10:33:03.207703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:56.209 [2024-11-25 10:33:03.207711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:56.209 [2024-11-25 10:33:03.207721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:56.209 [2024-11-25 10:33:03.207731] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:56.209 [2024-11-25 10:33:03.207744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.209 [2024-11-25 10:33:03.207759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:56.209 [2024-11-25 10:33:03.207770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:56.209 [2024-11-25 10:33:03.207781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:56.209 [2024-11-25 10:33:03.207791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:56.209 [2024-11-25 10:33:03.207801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:56.209 [2024-11-25 10:33:03.207811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:56.209 [2024-11-25 10:33:03.207821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:56.209 [2024-11-25 10:33:03.207832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:56.209 [2024-11-25 10:33:03.207842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:56.210 [2024-11-25 10:33:03.207852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:56.210 [2024-11-25 10:33:03.207901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:56.210 [2024-11-25 10:33:03.207913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207923] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.210 [2024-11-25 10:33:03.207933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:56.210 [2024-11-25 10:33:03.207943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:56.210 [2024-11-25 10:33:03.207953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:56.210 [2024-11-25 10:33:03.207964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.207975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:56.210 [2024-11-25 10:33:03.207985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:27:56.210 [2024-11-25 10:33:03.207994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.247269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.247307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:56.210 [2024-11-25 10:33:03.247324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.293 ms 00:27:56.210 [2024-11-25 10:33:03.247335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.247413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.247425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:56.210 [2024-11-25 10:33:03.247436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:56.210 [2024-11-25 10:33:03.247449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.302780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.302957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:56.210 [2024-11-25 10:33:03.302979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.349 ms 00:27:56.210 [2024-11-25 10:33:03.302990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.303026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.303043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:56.210 [2024-11-25 10:33:03.303053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:56.210 [2024-11-25 10:33:03.303063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.303562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.303577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:56.210 [2024-11-25 10:33:03.303588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:27:56.210 [2024-11-25 10:33:03.303598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.210 [2024-11-25 10:33:03.303717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.210 [2024-11-25 10:33:03.303731] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:56.210 [2024-11-25 10:33:03.303746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:56.210 [2024-11-25 10:33:03.303756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.322225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.469 [2024-11-25 10:33:03.322377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:56.469 [2024-11-25 10:33:03.322398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:27:56.469 [2024-11-25 10:33:03.322409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.341662] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:56.469 [2024-11-25 10:33:03.341700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:56.469 [2024-11-25 10:33:03.341716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.469 [2024-11-25 10:33:03.341726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:56.469 [2024-11-25 10:33:03.341737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.215 ms 00:27:56.469 [2024-11-25 10:33:03.341747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.371450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.469 [2024-11-25 10:33:03.371621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:56.469 [2024-11-25 10:33:03.371646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.710 ms 00:27:56.469 [2024-11-25 10:33:03.371658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.389884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.469 [2024-11-25 10:33:03.389923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:56.469 [2024-11-25 10:33:03.389937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.105 ms 00:27:56.469 [2024-11-25 10:33:03.389947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.407760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.469 [2024-11-25 10:33:03.407796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:56.469 [2024-11-25 10:33:03.407809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.803 ms 00:27:56.469 [2024-11-25 10:33:03.407834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.469 [2024-11-25 10:33:03.408647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.408673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:56.470 [2024-11-25 10:33:03.408685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:27:56.470 [2024-11-25 10:33:03.408696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.495546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.495617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:27:56.470 [2024-11-25 10:33:03.495633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.969 ms 00:27:56.470 [2024-11-25 10:33:03.495660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.506220] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:56.470 [2024-11-25 10:33:03.508861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.508888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:56.470 [2024-11-25 10:33:03.508902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:27:56.470 [2024-11-25 10:33:03.508928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.509011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.509024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:56.470 [2024-11-25 10:33:03.509039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:56.470 [2024-11-25 10:33:03.509049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.509136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.509149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:56.470 [2024-11-25 10:33:03.509160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:56.470 [2024-11-25 10:33:03.509170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.509195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.509206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:56.470 [2024-11-25 10:33:03.509216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:56.470 [2024-11-25 10:33:03.509229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.509259] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:56.470 [2024-11-25 10:33:03.509271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.509281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:56.470 [2024-11-25 10:33:03.509291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:56.470 [2024-11-25 10:33:03.509301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.546567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.546718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:56.470 [2024-11-25 10:33:03.546871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.304 ms 00:27:56.470 [2024-11-25 10:33:03.546909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.547003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:56.470 [2024-11-25 10:33:03.547091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:56.470 [2024-11-25 10:33:03.547127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 
00:27:56.470 [2024-11-25 10:33:03.547157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:56.470 [2024-11-25 10:33:03.548342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.858 ms, result 0 00:27:57.848 [2024-11-25T10:33:05.897Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-25T10:33:43.286Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-25 10:33:43.183277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.183341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:36.174 [2024-11-25 10:33:43.183365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.174 [2024-11-25 10:33:43.183376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.185020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:36.174 [2024-11-25 10:33:43.191734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.191773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:36.174 [2024-11-25
10:33:43.191786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.683 ms 00:28:36.174 [2024-11-25 10:33:43.191795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.202369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.202412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:36.174 [2024-11-25 10:33:43.202434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.743 ms 00:28:36.174 [2024-11-25 10:33:43.202444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.220269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.220310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:36.174 [2024-11-25 10:33:43.220324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.836 ms 00:28:36.174 [2024-11-25 10:33:43.220337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.225394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.225428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:36.174 [2024-11-25 10:33:43.225440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.032 ms 00:28:36.174 [2024-11-25 10:33:43.225457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.261347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.261407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:36.174 [2024-11-25 10:33:43.261420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.893 ms 00:28:36.174 [2024-11-25 10:33:43.261431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.174 [2024-11-25 10:33:43.282042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.174 [2024-11-25 10:33:43.282080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:36.174 [2024-11-25 10:33:43.282094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.608 ms 00:28:36.174 [2024-11-25 10:33:43.282105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.394714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.434 [2024-11-25 10:33:43.394755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:36.434 [2024-11-25 10:33:43.394768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.752 ms 00:28:36.434 [2024-11-25 10:33:43.394779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.431937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.434 [2024-11-25 10:33:43.432116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:36.434 [2024-11-25 10:33:43.432138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.199 ms 00:28:36.434 [2024-11-25 10:33:43.432149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.468341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.434 [2024-11-25 10:33:43.468385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist trim metadata 00:28:36.434 [2024-11-25 10:33:43.468399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.211 ms 00:28:36.434 [2024-11-25 10:33:43.468409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.503867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.434 [2024-11-25 10:33:43.503908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:36.434 [2024-11-25 10:33:43.503922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.476 ms 00:28:36.434 [2024-11-25 10:33:43.503932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.540303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.434 [2024-11-25 10:33:43.540344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:36.434 [2024-11-25 10:33:43.540358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.347 ms 00:28:36.434 [2024-11-25 10:33:43.540368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.434 [2024-11-25 10:33:43.540406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:36.434 [2024-11-25 10:33:43.540422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100608 / 261120 wr_cnt: 1 state: open 00:28:36.434 [2024-11-25 10:33:43.540436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 
state: free 00:28:36.434 [2024-11-25 10:33:43.540617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:36.434 [2024-11-25 10:33:43.540680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 
0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.540991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541392] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:36.435 [2024-11-25 10:33:43.541414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:36.436 [2024-11-25 10:33:43.541513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:36.436 [2024-11-25 10:33:43.541524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:28:36.436 [2024-11-25 10:33:43.541535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100608 00:28:36.436 [2024-11-25 10:33:43.541545] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101568 00:28:36.436 [2024-11-25 10:33:43.541555] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100608 00:28:36.436 [2024-11-25 10:33:43.541565] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:28:36.436 [2024-11-25 10:33:43.541591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:36.436 [2024-11-25 10:33:43.541601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:36.436 [2024-11-25 10:33:43.541610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:36.436 [2024-11-25 10:33:43.541619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:36.436 [2024-11-25 10:33:43.541628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:36.436 [2024-11-25 10:33:43.541638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.436 [2024-11-25 10:33:43.541648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:36.436 [2024-11-25 10:33:43.541659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:28:36.436 [2024-11-25 10:33:43.541668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.561525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.695 [2024-11-25 10:33:43.561561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:36.695 [2024-11-25 10:33:43.561587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.851 ms 00:28:36.695 [2024-11-25 10:33:43.561597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
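
The statistics dump above is internally consistent: all 100608 valid LBAs sit in Band 1, the only open band, and the reported WAF follows directly from total vs. user writes, 101568 / 100608 ≈ 1.0095 (the extra 960 block writes are presumably FTL metadata). A minimal sketch of that arithmetic, with the numbers lifted from the dump; the helper itself is hypothetical, not an SPDK API:

# Sketch: re-derive the WAF reported by ftl_dev_dump_stats above. The
# numbers come from the dump; the helper is hypothetical, not an SPDK API.
def write_amplification(total_writes: int, user_writes: int) -> float:
    # WAF = blocks written to the media / blocks written by the user.
    return total_writes / user_writes

total_writes = 101568   # "total writes" from the dump
user_writes = 100608    # "user writes" from the dump
print(f"WAF: {write_amplification(total_writes, user_writes):.4f}")
# Prints "WAF: 1.0095", matching the logged value.
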
00:28:36.695 [2024-11-25 10:33:43.562125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.695 [2024-11-25 10:33:43.562141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:36.695 [2024-11-25 10:33:43.562151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:28:36.695 [2024-11-25 10:33:43.562161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.613274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.695 [2024-11-25 10:33:43.613323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.695 [2024-11-25 10:33:43.613337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.695 [2024-11-25 10:33:43.613347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.613427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.695 [2024-11-25 10:33:43.613438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.695 [2024-11-25 10:33:43.613448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.695 [2024-11-25 10:33:43.613458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.613563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.695 [2024-11-25 10:33:43.613583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.695 [2024-11-25 10:33:43.613594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.695 [2024-11-25 10:33:43.613623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.613641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.695 [2024-11-25 10:33:43.613652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.695 [2024-11-25 10:33:43.613662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.695 [2024-11-25 10:33:43.613672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.695 [2024-11-25 10:33:43.737864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.695 [2024-11-25 10:33:43.737922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.695 [2024-11-25 10:33:43.737938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.695 [2024-11-25 10:33:43.737948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.837926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.837974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.955 [2024-11-25 10:33:43.837988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.837999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.955 [2024-11-25 10:33:43.838107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 
10:33:43.838117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.955 [2024-11-25 10:33:43.838182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.838192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.955 [2024-11-25 10:33:43.838330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.838345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:36.955 [2024-11-25 10:33:43.838407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.838417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.955 [2024-11-25 10:33:43.838473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.838487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.955 [2024-11-25 10:33:43.838568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.955 [2024-11-25 10:33:43.838578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.955 [2024-11-25 10:33:43.838588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.955 [2024-11-25 10:33:43.838704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 657.198 ms, result 0 00:28:38.862 00:28:38.862 00:28:38.862 10:33:45 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:38.862 [2024-11-25 10:33:45.718911] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
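
The spdk_dd invocation above drives the restore read-back: assuming --skip and --count are counted in input-device blocks (as with classic dd) and the 4 KiB FTL block size implied by the layout dump (20971520 L2P entries at 4 bytes each exactly fill the 80 MiB l2p region), --count=262144 is 1 GiB, matching the 1024 MB copy earlier in the log, and --skip=131072 starts 512 MiB into ftl0. A quick check of that arithmetic; the block size and unit semantics are assumptions, not confirmed by the log:

# Sketch of the size arithmetic behind the spdk_dd restore command above.
# BLOCK is an assumed 4 KiB ftl0 logical block size; --skip/--count are
# assumed to be in input blocks, as with classic dd.
BLOCK = 4096
skip, count = 131072, 262144              # from the command line
l2p_entries, l2p_addr_size = 20971520, 4  # from the layout dump

print(f"l2p region: {l2p_entries * l2p_addr_size / 2**20:.2f} MiB")  # 80.00
print(f"skip:  {skip * BLOCK // 2**20} MiB into ftl0")               # 512
print(f"count: {count * BLOCK // 2**20} MiB to copy")                # 1024
# 1024 MiB at the logged average of 25 MBps is roughly 41 s, consistent
# with the 10:33:05 -> 10:33:43 Copying timestamps above.
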
00:28:38.862 [2024-11-25 10:33:45.719070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80446 ] 00:28:38.862 [2024-11-25 10:33:45.897561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.120 [2024-11-25 10:33:46.004592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.379 [2024-11-25 10:33:46.352330] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:39.379 [2024-11-25 10:33:46.352399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:39.641 [2024-11-25 10:33:46.512879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.512939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:39.641 [2024-11-25 10:33:46.512954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:39.641 [2024-11-25 10:33:46.512965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.513012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.513027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.641 [2024-11-25 10:33:46.513038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:39.641 [2024-11-25 10:33:46.513047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.513069] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:39.641 [2024-11-25 10:33:46.514085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:39.641 [2024-11-25 10:33:46.514113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.514124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.641 [2024-11-25 10:33:46.514134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:28:39.641 [2024-11-25 10:33:46.514144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.515565] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:39.641 [2024-11-25 10:33:46.534692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.534743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:39.641 [2024-11-25 10:33:46.534757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.158 ms 00:28:39.641 [2024-11-25 10:33:46.534767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.534829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.534842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:39.641 [2024-11-25 10:33:46.534852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:39.641 [2024-11-25 10:33:46.534861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.541527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:39.641 [2024-11-25 10:33:46.541556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.641 [2024-11-25 10:33:46.541569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:28:39.641 [2024-11-25 10:33:46.541583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.541659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.541672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.641 [2024-11-25 10:33:46.541683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:39.641 [2024-11-25 10:33:46.541693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.541731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.541743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:39.641 [2024-11-25 10:33:46.541754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:39.641 [2024-11-25 10:33:46.541763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.541790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:39.641 [2024-11-25 10:33:46.546540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.546572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.641 [2024-11-25 10:33:46.546587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:28:39.641 [2024-11-25 10:33:46.546613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.546642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.546653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:39.641 [2024-11-25 10:33:46.546663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:39.641 [2024-11-25 10:33:46.546673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.546725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:39.641 [2024-11-25 10:33:46.546747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:39.641 [2024-11-25 10:33:46.546782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:39.641 [2024-11-25 10:33:46.546802] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:39.641 [2024-11-25 10:33:46.546889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:39.641 [2024-11-25 10:33:46.546902] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:39.641 [2024-11-25 10:33:46.546916] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:39.641 [2024-11-25 10:33:46.546929] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:39.641 [2024-11-25 10:33:46.546940] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:39.641 [2024-11-25 10:33:46.546952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:39.641 [2024-11-25 10:33:46.546961] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:39.641 [2024-11-25 10:33:46.546971] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:39.641 [2024-11-25 10:33:46.546984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:39.641 [2024-11-25 10:33:46.546994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.547005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:39.641 [2024-11-25 10:33:46.547015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:28:39.641 [2024-11-25 10:33:46.547024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.547096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.641 [2024-11-25 10:33:46.547107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:39.641 [2024-11-25 10:33:46.547117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:39.641 [2024-11-25 10:33:46.547126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.641 [2024-11-25 10:33:46.547222] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:39.641 [2024-11-25 10:33:46.547236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:39.641 [2024-11-25 10:33:46.547248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.641 [2024-11-25 10:33:46.547258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.641 [2024-11-25 10:33:46.547268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:39.641 [2024-11-25 10:33:46.547277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:39.641 [2024-11-25 10:33:46.547286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:39.641 [2024-11-25 10:33:46.547296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:39.641 [2024-11-25 10:33:46.547305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:39.641 [2024-11-25 10:33:46.547314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.641 [2024-11-25 10:33:46.547325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:39.641 [2024-11-25 10:33:46.547335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:39.641 [2024-11-25 10:33:46.547344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.641 [2024-11-25 10:33:46.547363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:39.641 [2024-11-25 10:33:46.547373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:39.641 [2024-11-25 10:33:46.547382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.641 [2024-11-25 10:33:46.547391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:39.642 [2024-11-25 10:33:46.547400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547410] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:39.642 [2024-11-25 10:33:46.547428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:39.642 [2024-11-25 10:33:46.547455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:39.642 [2024-11-25 10:33:46.547482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:39.642 [2024-11-25 10:33:46.547526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:39.642 [2024-11-25 10:33:46.547553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.642 [2024-11-25 10:33:46.547572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:39.642 [2024-11-25 10:33:46.547581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:39.642 [2024-11-25 10:33:46.547590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.642 [2024-11-25 10:33:46.547599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:39.642 [2024-11-25 10:33:46.547608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:39.642 [2024-11-25 10:33:46.547617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:39.642 [2024-11-25 10:33:46.547636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:39.642 [2024-11-25 10:33:46.547645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547654] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:39.642 [2024-11-25 10:33:46.547664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:39.642 [2024-11-25 10:33:46.547673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.642 [2024-11-25 10:33:46.547693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:39.642 [2024-11-25 10:33:46.547703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:39.642 [2024-11-25 10:33:46.547712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:39.642 
[2024-11-25 10:33:46.547722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:39.642 [2024-11-25 10:33:46.547730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:39.642 [2024-11-25 10:33:46.547739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:39.642 [2024-11-25 10:33:46.547750] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:39.642 [2024-11-25 10:33:46.547762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:39.642 [2024-11-25 10:33:46.547788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:39.642 [2024-11-25 10:33:46.547798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:39.642 [2024-11-25 10:33:46.547808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:39.642 [2024-11-25 10:33:46.547818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:39.642 [2024-11-25 10:33:46.547828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:39.642 [2024-11-25 10:33:46.547838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:39.642 [2024-11-25 10:33:46.547848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:39.642 [2024-11-25 10:33:46.547858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:39.642 [2024-11-25 10:33:46.547868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:39.642 [2024-11-25 10:33:46.547919] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:39.642 [2024-11-25 10:33:46.547930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.642 [2024-11-25 10:33:46.547951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:39.642 [2024-11-25 10:33:46.547962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:39.642 [2024-11-25 10:33:46.547972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:39.642 [2024-11-25 10:33:46.547983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.642 [2024-11-25 10:33:46.547993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:39.642 [2024-11-25 10:33:46.548003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:28:39.642 [2024-11-25 10:33:46.548013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.642 [2024-11-25 10:33:46.585856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.642 [2024-11-25 10:33:46.585893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:39.642 [2024-11-25 10:33:46.585907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.859 ms 00:28:39.642 [2024-11-25 10:33:46.585921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.642 [2024-11-25 10:33:46.585997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.642 [2024-11-25 10:33:46.586008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:39.642 [2024-11-25 10:33:46.586019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:39.642 [2024-11-25 10:33:46.586028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.642 [2024-11-25 10:33:46.638413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.642 [2024-11-25 10:33:46.638449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:39.642 [2024-11-25 10:33:46.638463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.413 ms 00:28:39.642 [2024-11-25 10:33:46.638474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.638532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.638560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:39.643 [2024-11-25 10:33:46.638575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:39.643 [2024-11-25 10:33:46.638585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.639067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.639086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:39.643 [2024-11-25 10:33:46.639097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:28:39.643 [2024-11-25 10:33:46.639107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.639219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.639233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:39.643 [2024-11-25 10:33:46.639249] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:28:39.643 [2024-11-25 10:33:46.639259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.657864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.658015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:39.643 [2024-11-25 10:33:46.658042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:28:39.643 [2024-11-25 10:33:46.658052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.676977] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:39.643 [2024-11-25 10:33:46.677015] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:39.643 [2024-11-25 10:33:46.677031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.677058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:39.643 [2024-11-25 10:33:46.677069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.908 ms 00:28:39.643 [2024-11-25 10:33:46.677079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.706380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.706557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:39.643 [2024-11-25 10:33:46.706579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.307 ms 00:28:39.643 [2024-11-25 10:33:46.706590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.724634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.724787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:39.643 [2024-11-25 10:33:46.724807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.974 ms 00:28:39.643 [2024-11-25 10:33:46.724818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.743157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.743211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:39.643 [2024-11-25 10:33:46.743224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:28:39.643 [2024-11-25 10:33:46.743234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.643 [2024-11-25 10:33:46.744001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.643 [2024-11-25 10:33:46.744032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:39.643 [2024-11-25 10:33:46.744048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:28:39.643 [2024-11-25 10:33:46.744058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.902 [2024-11-25 10:33:46.828859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.828926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:39.903 [2024-11-25 10:33:46.828948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.915 ms 00:28:39.903 [2024-11-25 10:33:46.828959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.840130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:39.903 [2024-11-25 10:33:46.843418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.843454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:39.903 [2024-11-25 10:33:46.843469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.430 ms 00:28:39.903 [2024-11-25 10:33:46.843480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.843590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.843604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:39.903 [2024-11-25 10:33:46.843616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:39.903 [2024-11-25 10:33:46.843629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.845051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.845088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:39.903 [2024-11-25 10:33:46.845100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:28:39.903 [2024-11-25 10:33:46.845110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.845147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.845158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:39.903 [2024-11-25 10:33:46.845169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:39.903 [2024-11-25 10:33:46.845179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.845217] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:39.903 [2024-11-25 10:33:46.845229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.845239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:39.903 [2024-11-25 10:33:46.845249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:39.903 [2024-11-25 10:33:46.845259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.881646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.881804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:39.903 [2024-11-25 10:33:46.881826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.426 ms 00:28:39.903 [2024-11-25 10:33:46.881843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.903 [2024-11-25 10:33:46.881935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.903 [2024-11-25 10:33:46.881949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:39.903 [2024-11-25 10:33:46.881961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:39.903 [2024-11-25 10:33:46.881972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
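Each management step above is logged by trace_step as an Action / name / duration / status quadruple, which makes a startup trace like this easy to mine for slow steps. A minimal sketch, assuming one log entry per line and an illustrative file name ftl.log (this helper is not part of the test suite):

    # Pair every "name:" entry with the "duration:" entry that follows it,
    # then list the slowest FTL management steps first.
    grep -E 'trace_step.*(name|duration):' ftl.log |
      sed -E 's/.* name: //; s/.* duration: ([0-9.]+) ms.*/\1/' |
      paste - - |
      sort -t$'\t' -k2,2 -rn | head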
00:28:39.903 [2024-11-25 10:33:46.883012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.309 ms, result 0 00:28:41.284 [2024-11-25T10:33:49.344Z] Copying: 21/1024 [MB] (21 MBps) [... 38 intermediate carriage-return progress updates elided; throughput held at 25-27 MBps throughout ...] [2024-11-25T10:34:26.597Z] Copying: 1024/1024 [MB] (average 26 MBps) [2024-11-25 10:34:26.498422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.498518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:19.485 [2024-11-25 10:34:26.498537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:19.485 [2024-11-25 10:34:26.498555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.498583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:19.485 [2024-11-25 10:34:26.504061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.504103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:19.485 [2024-11-25 10:34:26.504119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:29:19.485 
[2024-11-25 10:34:26.504130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.504353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.504367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:19.485 [2024-11-25 10:34:26.504586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:29:19.485 [2024-11-25 10:34:26.504604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.509612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.509654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:19.485 [2024-11-25 10:34:26.509668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms 00:29:19.485 [2024-11-25 10:34:26.509680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.515486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.515529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:19.485 [2024-11-25 10:34:26.515541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:29:19.485 [2024-11-25 10:34:26.515551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.553236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.553277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:19.485 [2024-11-25 10:34:26.553291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.525 ms 00:29:19.485 [2024-11-25 10:34:26.553301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.485 [2024-11-25 10:34:26.573159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.485 [2024-11-25 10:34:26.573218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:19.485 [2024-11-25 10:34:26.573233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.851 ms 00:29:19.485 [2024-11-25 10:34:26.573243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.744 [2024-11-25 10:34:26.711412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.744 [2024-11-25 10:34:26.711611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:19.744 [2024-11-25 10:34:26.711634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 138.351 ms 00:29:19.744 [2024-11-25 10:34:26.711646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.744 [2024-11-25 10:34:26.748819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.744 [2024-11-25 10:34:26.748857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:19.744 [2024-11-25 10:34:26.748871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.209 ms 00:29:19.744 [2024-11-25 10:34:26.748882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.744 [2024-11-25 10:34:26.784827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.745 [2024-11-25 10:34:26.784866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:19.745 [2024-11-25 10:34:26.784879] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.963 ms 00:29:19.745 [2024-11-25 10:34:26.784905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.745 [2024-11-25 10:34:26.820321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.745 [2024-11-25 10:34:26.820358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:19.745 [2024-11-25 10:34:26.820371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.434 ms 00:29:19.745 [2024-11-25 10:34:26.820397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.004 [2024-11-25 10:34:26.856377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.004 [2024-11-25 10:34:26.856414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:20.004 [2024-11-25 10:34:26.856426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.963 ms 00:29:20.005 [2024-11-25 10:34:26.856452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.005 [2024-11-25 10:34:26.856488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:20.005 [2024-11-25 10:34:26.856519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:20.005 [2024-11-25 10:34:26.856532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856692] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856955] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.856996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 
10:34:26.857212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:20.005 [2024-11-25 10:34:26.857428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:29:20.006 [2024-11-25 10:34:26.857482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:20.006 [2024-11-25 10:34:26.857592] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:20.006 [2024-11-25 10:34:26.857602] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430ca0c-e16a-40b5-83da-f68ac69b1b9c 00:29:20.006 [2024-11-25 10:34:26.857613] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:20.006 [2024-11-25 10:34:26.857623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31424 00:29:20.006 [2024-11-25 10:34:26.857632] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30464 00:29:20.006 [2024-11-25 10:34:26.857643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0315 00:29:20.006 [2024-11-25 10:34:26.857653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:20.006 [2024-11-25 10:34:26.857678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:20.006 [2024-11-25 10:34:26.857688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:20.006 [2024-11-25 10:34:26.857698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:20.006 [2024-11-25 10:34:26.857706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:20.006 [2024-11-25 10:34:26.857716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.006 [2024-11-25 10:34:26.857726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:20.006 [2024-11-25 10:34:26.857737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:29:20.006 [2024-11-25 10:34:26.857746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.877536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.006 [2024-11-25 10:34:26.877570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:20.006 [2024-11-25 10:34:26.877583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.788 ms 00:29:20.006 [2024-11-25 10:34:26.877600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.878148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:20.006 [2024-11-25 10:34:26.878160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:20.006 [2024-11-25 10:34:26.878170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:29:20.006 [2024-11-25 10:34:26.878179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.929865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.006 [2024-11-25 10:34:26.929907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.006 [2024-11-25 10:34:26.929920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.006 [2024-11-25 10:34:26.929946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.929999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.006 [2024-11-25 10:34:26.930010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.006 [2024-11-25 10:34:26.930020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.006 [2024-11-25 10:34:26.930030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.930111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.006 [2024-11-25 10:34:26.930125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.006 [2024-11-25 10:34:26.930141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.006 [2024-11-25 10:34:26.930150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:26.930168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.006 [2024-11-25 10:34:26.930178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.006 [2024-11-25 10:34:26.930197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.006 [2024-11-25 10:34:26.930207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.006 [2024-11-25 10:34:27.053640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.006 [2024-11-25 10:34:27.053872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.006 [2024-11-25 10:34:27.053897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.006 [2024-11-25 10:34:27.053908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.265 [2024-11-25 10:34:27.156135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:20.266 [2024-11-25 10:34:27.156198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.266 [2024-11-25 10:34:27.156319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
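The statistics dump above can be sanity-checked by hand: WAF (write amplification factor) is simply total writes divided by user writes, so the two counters and the reported ratio should agree.

    # Cross-check the reported WAF against the two write counters above.
    awk 'BEGIN { printf "WAF = %.4f\n", 31424 / 30464 }'    # -> WAF = 1.0315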
00:29:20.266 [2024-11-25 10:34:27.156379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.266 [2024-11-25 10:34:27.156400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.266 [2024-11-25 10:34:27.156569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:20.266 [2024-11-25 10:34:27.156641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.266 [2024-11-25 10:34:27.156710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:20.266 [2024-11-25 10:34:27.156781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.266 [2024-11-25 10:34:27.156792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:20.266 [2024-11-25 10:34:27.156802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.266 [2024-11-25 10:34:27.156918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 659.533 ms, result 0 00:29:21.202 00:29:21.202 00:29:21.202 10:34:28 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:23.128 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:23.128 10:34:29 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:23.128 10:34:29 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:23.128 10:34:29 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:23.128 Process with pid 78949 is not found 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78949 00:29:23.128 10:34:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78949 ']' 00:29:23.128 10:34:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78949 00:29:23.128 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78949) - No such process 00:29:23.128 10:34:30 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78949 is not found' 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:23.128 Remove shared memory files 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:23.128 10:34:30 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:23.128 ************************************ 00:29:23.128 END TEST ftl_restore 00:29:23.128 ************************************ 00:29:23.128 00:29:23.128 real 3m9.556s 00:29:23.128 user 2m57.483s 00:29:23.128 sys 0m13.756s 00:29:23.128 10:34:30 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.128 10:34:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:23.128 10:34:30 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:23.128 10:34:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:23.128 10:34:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.128 10:34:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:23.128 ************************************ 00:29:23.128 START TEST ftl_dirty_shutdown 00:29:23.128 ************************************ 00:29:23.128 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:23.388 * Looking for test storage... 
00:29:23.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.388 --rc genhtml_branch_coverage=1 00:29:23.388 --rc genhtml_function_coverage=1 00:29:23.388 --rc genhtml_legend=1 00:29:23.388 --rc geninfo_all_blocks=1 00:29:23.388 --rc geninfo_unexecuted_blocks=1 00:29:23.388 00:29:23.388 ' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.388 --rc genhtml_branch_coverage=1 00:29:23.388 --rc genhtml_function_coverage=1 00:29:23.388 --rc genhtml_legend=1 00:29:23.388 --rc geninfo_all_blocks=1 00:29:23.388 --rc geninfo_unexecuted_blocks=1 00:29:23.388 00:29:23.388 ' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.388 --rc genhtml_branch_coverage=1 00:29:23.388 --rc genhtml_function_coverage=1 00:29:23.388 --rc genhtml_legend=1 00:29:23.388 --rc geninfo_all_blocks=1 00:29:23.388 --rc geninfo_unexecuted_blocks=1 00:29:23.388 00:29:23.388 ' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.388 --rc genhtml_branch_coverage=1 00:29:23.388 --rc genhtml_function_coverage=1 00:29:23.388 --rc genhtml_legend=1 00:29:23.388 --rc geninfo_all_blocks=1 00:29:23.388 --rc geninfo_unexecuted_blocks=1 00:29:23.388 00:29:23.388 ' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:23.388 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:23.389 10:34:30 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80966 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80966 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80966 ']' 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.389 10:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.648 [2024-11-25 10:34:30.531662] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
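The launch-and-wait pattern traced above (dirty_shutdown.sh@44-47) backgrounds spdk_tgt and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming the paths and retry budget from this run; probing the socket with rpc_get_methods is an assumption on my part (the traced helper's internals are not fully shown), and error handling is trimmed:

    # Launch the SPDK target in the background, then poll until it is both
    # alive and answering RPCs on its UNIX domain socket.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt_bin" -m 0x1 &
    svcpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 1; i <= 100; i++)); do
        # kill -0 checks liveness without delivering a signal
        if ! kill -0 "$svcpid" 2> /dev/null; then
            echo "spdk_tgt exited prematurely" >&2
            exit 1
        fi
        # assumed probe: any cheap RPC proves the socket is up
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done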
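The steps that follow size each bdev with a small get_bdev_size helper: bdev_get_bdevs returns a JSON array, jq extracts block_size and num_blocks, and the product is converted to MiB. A sketch reconstructed from the traced jq calls, reusing rpc_py from the previous sketch:

    # Query one bdev, extract its geometry with jq, and report size in MiB
    # (e.g. 1310720 blocks * 4096 B = 5120 MiB for nvme0n1 below).
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb

        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")

        echo $((bs * nb / 1024 / 1024))
    }

    base_size=$(get_bdev_size nvme0n1)   # -> 5120 in this run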
00:29:23.648 [2024-11-25 10:34:30.531911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80966 ] 00:29:23.648 [2024-11-25 10:34:30.715608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.907 [2024-11-25 10:34:30.826888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:24.845 10:34:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:25.104 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:25.363 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:25.363 { 00:29:25.363 "name": "nvme0n1", 00:29:25.363 "aliases": [ 00:29:25.363 "019c3fd3-b604-4eb7-9bd4-ff4944e40093" 00:29:25.363 ], 00:29:25.363 "product_name": "NVMe disk", 00:29:25.363 "block_size": 4096, 00:29:25.363 "num_blocks": 1310720, 00:29:25.363 "uuid": "019c3fd3-b604-4eb7-9bd4-ff4944e40093", 00:29:25.364 "numa_id": -1, 00:29:25.364 "assigned_rate_limits": { 00:29:25.364 "rw_ios_per_sec": 0, 00:29:25.364 "rw_mbytes_per_sec": 0, 00:29:25.364 "r_mbytes_per_sec": 0, 00:29:25.364 "w_mbytes_per_sec": 0 00:29:25.364 }, 00:29:25.364 "claimed": true, 00:29:25.364 "claim_type": "read_many_write_one", 00:29:25.364 "zoned": false, 00:29:25.364 "supported_io_types": { 00:29:25.364 "read": true, 00:29:25.364 "write": true, 00:29:25.364 "unmap": true, 00:29:25.364 "flush": true, 00:29:25.364 "reset": true, 00:29:25.364 "nvme_admin": true, 00:29:25.364 "nvme_io": true, 00:29:25.364 "nvme_io_md": false, 00:29:25.364 "write_zeroes": true, 00:29:25.364 "zcopy": false, 00:29:25.364 "get_zone_info": false, 00:29:25.364 "zone_management": false, 00:29:25.364 "zone_append": false, 00:29:25.364 "compare": true, 00:29:25.364 "compare_and_write": false, 00:29:25.364 "abort": true, 00:29:25.364 "seek_hole": false, 00:29:25.364 "seek_data": false, 00:29:25.364 
"copy": true, 00:29:25.364 "nvme_iov_md": false 00:29:25.364 }, 00:29:25.364 "driver_specific": { 00:29:25.364 "nvme": [ 00:29:25.364 { 00:29:25.364 "pci_address": "0000:00:11.0", 00:29:25.364 "trid": { 00:29:25.364 "trtype": "PCIe", 00:29:25.364 "traddr": "0000:00:11.0" 00:29:25.364 }, 00:29:25.364 "ctrlr_data": { 00:29:25.364 "cntlid": 0, 00:29:25.364 "vendor_id": "0x1b36", 00:29:25.364 "model_number": "QEMU NVMe Ctrl", 00:29:25.364 "serial_number": "12341", 00:29:25.364 "firmware_revision": "8.0.0", 00:29:25.364 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:25.364 "oacs": { 00:29:25.364 "security": 0, 00:29:25.364 "format": 1, 00:29:25.364 "firmware": 0, 00:29:25.364 "ns_manage": 1 00:29:25.364 }, 00:29:25.364 "multi_ctrlr": false, 00:29:25.364 "ana_reporting": false 00:29:25.364 }, 00:29:25.364 "vs": { 00:29:25.364 "nvme_version": "1.4" 00:29:25.364 }, 00:29:25.364 "ns_data": { 00:29:25.364 "id": 1, 00:29:25.364 "can_share": false 00:29:25.364 } 00:29:25.364 } 00:29:25.364 ], 00:29:25.364 "mp_policy": "active_passive" 00:29:25.364 } 00:29:25.364 } 00:29:25.364 ]' 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:25.364 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:25.623 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f3964d6d-da85-436a-92b9-43e8bf7b701c 00:29:25.623 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:25.623 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3964d6d-da85-436a-92b9-43e8bf7b701c 00:29:25.882 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:25.882 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=690c8a9e-712b-4ff1-89dc-77d483573e88 00:29:25.882 10:34:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 690c8a9e-712b-4ff1-89dc-77d483573e88 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:26.141 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.400 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:26.400 { 00:29:26.400 "name": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:26.400 "aliases": [ 00:29:26.400 "lvs/nvme0n1p0" 00:29:26.400 ], 00:29:26.400 "product_name": "Logical Volume", 00:29:26.400 "block_size": 4096, 00:29:26.400 "num_blocks": 26476544, 00:29:26.400 "uuid": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:26.400 "assigned_rate_limits": { 00:29:26.400 "rw_ios_per_sec": 0, 00:29:26.400 "rw_mbytes_per_sec": 0, 00:29:26.400 "r_mbytes_per_sec": 0, 00:29:26.400 "w_mbytes_per_sec": 0 00:29:26.400 }, 00:29:26.400 "claimed": false, 00:29:26.400 "zoned": false, 00:29:26.400 "supported_io_types": { 00:29:26.400 "read": true, 00:29:26.400 "write": true, 00:29:26.400 "unmap": true, 00:29:26.400 "flush": false, 00:29:26.400 "reset": true, 00:29:26.400 "nvme_admin": false, 00:29:26.400 "nvme_io": false, 00:29:26.400 "nvme_io_md": false, 00:29:26.400 "write_zeroes": true, 00:29:26.400 "zcopy": false, 00:29:26.400 "get_zone_info": false, 00:29:26.400 "zone_management": false, 00:29:26.400 "zone_append": false, 00:29:26.400 "compare": false, 00:29:26.400 "compare_and_write": false, 00:29:26.400 "abort": false, 00:29:26.400 "seek_hole": true, 00:29:26.400 "seek_data": true, 00:29:26.400 "copy": false, 00:29:26.400 "nvme_iov_md": false 00:29:26.400 }, 00:29:26.400 "driver_specific": { 00:29:26.400 "lvol": { 00:29:26.400 "lvol_store_uuid": "690c8a9e-712b-4ff1-89dc-77d483573e88", 00:29:26.400 "base_bdev": "nvme0n1", 00:29:26.400 "thin_provision": true, 00:29:26.400 "num_allocated_clusters": 0, 00:29:26.400 "snapshot": false, 00:29:26.400 "clone": false, 00:29:26.400 "esnap_clone": false 00:29:26.400 } 00:29:26.400 } 00:29:26.400 } 00:29:26.400 ]' 00:29:26.400 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:26.401 10:34:33 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:26.660 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:26.920 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:26.920 { 00:29:26.920 "name": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:26.920 "aliases": [ 00:29:26.920 "lvs/nvme0n1p0" 00:29:26.920 ], 00:29:26.920 "product_name": "Logical Volume", 00:29:26.920 "block_size": 4096, 00:29:26.920 "num_blocks": 26476544, 00:29:26.920 "uuid": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:26.920 "assigned_rate_limits": { 00:29:26.920 "rw_ios_per_sec": 0, 00:29:26.920 "rw_mbytes_per_sec": 0, 00:29:26.920 "r_mbytes_per_sec": 0, 00:29:26.920 "w_mbytes_per_sec": 0 00:29:26.920 }, 00:29:26.920 "claimed": false, 00:29:26.920 "zoned": false, 00:29:26.920 "supported_io_types": { 00:29:26.920 "read": true, 00:29:26.920 "write": true, 00:29:26.920 "unmap": true, 00:29:26.920 "flush": false, 00:29:26.920 "reset": true, 00:29:26.920 "nvme_admin": false, 00:29:26.920 "nvme_io": false, 00:29:26.920 "nvme_io_md": false, 00:29:26.920 "write_zeroes": true, 00:29:26.920 "zcopy": false, 00:29:26.920 "get_zone_info": false, 00:29:26.920 "zone_management": false, 00:29:26.920 "zone_append": false, 00:29:26.920 "compare": false, 00:29:26.920 "compare_and_write": false, 00:29:26.920 "abort": false, 00:29:26.920 "seek_hole": true, 00:29:26.920 "seek_data": true, 00:29:26.920 "copy": false, 00:29:26.920 "nvme_iov_md": false 00:29:26.920 }, 00:29:26.920 "driver_specific": { 00:29:26.920 "lvol": { 00:29:26.920 "lvol_store_uuid": "690c8a9e-712b-4ff1-89dc-77d483573e88", 00:29:26.920 "base_bdev": "nvme0n1", 00:29:26.920 "thin_provision": true, 00:29:26.920 "num_allocated_clusters": 0, 00:29:26.920 "snapshot": false, 00:29:26.920 "clone": false, 00:29:26.920 "esnap_clone": false 00:29:26.920 } 00:29:26.920 } 00:29:26.920 } 00:29:26.920 ]' 00:29:26.920 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:26.920 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:26.920 10:34:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:26.920 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:26.920 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:26.920 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:26.920 10:34:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:26.920 10:34:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:27.178 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 00:29:27.437 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:27.437 { 00:29:27.437 "name": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:27.437 "aliases": [ 00:29:27.437 "lvs/nvme0n1p0" 00:29:27.437 ], 00:29:27.437 "product_name": "Logical Volume", 00:29:27.437 "block_size": 4096, 00:29:27.437 "num_blocks": 26476544, 00:29:27.437 "uuid": "a16d1432-bfd4-4a6a-92c4-34a6d7ae7539", 00:29:27.437 "assigned_rate_limits": { 00:29:27.437 "rw_ios_per_sec": 0, 00:29:27.437 "rw_mbytes_per_sec": 0, 00:29:27.437 "r_mbytes_per_sec": 0, 00:29:27.437 "w_mbytes_per_sec": 0 00:29:27.437 }, 00:29:27.437 "claimed": false, 00:29:27.437 "zoned": false, 00:29:27.437 "supported_io_types": { 00:29:27.437 "read": true, 00:29:27.437 "write": true, 00:29:27.437 "unmap": true, 00:29:27.437 "flush": false, 00:29:27.437 "reset": true, 00:29:27.437 "nvme_admin": false, 00:29:27.437 "nvme_io": false, 00:29:27.437 "nvme_io_md": false, 00:29:27.437 "write_zeroes": true, 00:29:27.437 "zcopy": false, 00:29:27.437 "get_zone_info": false, 00:29:27.437 "zone_management": false, 00:29:27.437 "zone_append": false, 00:29:27.437 "compare": false, 00:29:27.437 "compare_and_write": false, 00:29:27.437 "abort": false, 00:29:27.437 "seek_hole": true, 00:29:27.437 "seek_data": true, 00:29:27.437 "copy": false, 00:29:27.437 "nvme_iov_md": false 00:29:27.437 }, 00:29:27.437 "driver_specific": { 00:29:27.437 "lvol": { 00:29:27.437 "lvol_store_uuid": "690c8a9e-712b-4ff1-89dc-77d483573e88", 00:29:27.437 "base_bdev": "nvme0n1", 00:29:27.437 "thin_provision": true, 00:29:27.437 "num_allocated_clusters": 0, 00:29:27.437 "snapshot": false, 00:29:27.438 "clone": false, 00:29:27.438 "esnap_clone": false 00:29:27.438 } 00:29:27.438 } 00:29:27.438 } 00:29:27.438 ]' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 
--l2p_dram_limit 10' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:27.438 10:34:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a16d1432-bfd4-4a6a-92c4-34a6d7ae7539 --l2p_dram_limit 10 -c nvc0n1p0 00:29:27.697 [2024-11-25 10:34:34.719128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.697 [2024-11-25 10:34:34.719342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:27.697 [2024-11-25 10:34:34.719375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:27.697 [2024-11-25 10:34:34.719386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.697 [2024-11-25 10:34:34.719484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.697 [2024-11-25 10:34:34.719513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.697 [2024-11-25 10:34:34.719529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:27.697 [2024-11-25 10:34:34.719540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.697 [2024-11-25 10:34:34.719574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:27.697 [2024-11-25 10:34:34.720595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:27.697 [2024-11-25 10:34:34.720622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.697 [2024-11-25 10:34:34.720634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.697 [2024-11-25 10:34:34.720647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:29:27.697 [2024-11-25 10:34:34.720657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.697 [2024-11-25 10:34:34.720736] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID da427e63-7cad-4dac-b19a-5c4ed8c3c31c 00:29:27.698 [2024-11-25 10:34:34.722133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.722163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:27.698 [2024-11-25 10:34:34.722175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:27.698 [2024-11-25 10:34:34.722190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.729718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.729875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.698 [2024-11-25 10:34:34.730018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.496 ms 00:29:27.698 [2024-11-25 10:34:34.730060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.730188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.730207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.698 [2024-11-25 10:34:34.730218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:29:27.698 [2024-11-25 10:34:34.730236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.730301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.730316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:27.698 [2024-11-25 10:34:34.730331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:27.698 [2024-11-25 10:34:34.730344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.730369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:27.698 [2024-11-25 10:34:34.735875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.735908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.698 [2024-11-25 10:34:34.735924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.519 ms 00:29:27.698 [2024-11-25 10:34:34.735935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.735970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.735981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:27.698 [2024-11-25 10:34:34.735994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:27.698 [2024-11-25 10:34:34.736004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.736041] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:27.698 [2024-11-25 10:34:34.736168] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:27.698 [2024-11-25 10:34:34.736188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:27.698 [2024-11-25 10:34:34.736202] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:27.698 [2024-11-25 10:34:34.736217] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736229] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736243] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:27.698 [2024-11-25 10:34:34.736255] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:27.698 [2024-11-25 10:34:34.736268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:27.698 [2024-11-25 10:34:34.736278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:27.698 [2024-11-25 10:34:34.736291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.736311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:27.698 [2024-11-25 10:34:34.736325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:29:27.698 [2024-11-25 10:34:34.736335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.736411] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.698 [2024-11-25 10:34:34.736422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:27.698 [2024-11-25 10:34:34.736435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:27.698 [2024-11-25 10:34:34.736448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.698 [2024-11-25 10:34:34.736560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:27.698 [2024-11-25 10:34:34.736575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:27.698 [2024-11-25 10:34:34.736599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:27.698 [2024-11-25 10:34:34.736631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:27.698 [2024-11-25 10:34:34.736665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:27.698 [2024-11-25 10:34:34.736687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:27.698 [2024-11-25 10:34:34.736697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:27.698 [2024-11-25 10:34:34.736709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:27.698 [2024-11-25 10:34:34.736720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:27.698 [2024-11-25 10:34:34.736735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:27.698 [2024-11-25 10:34:34.736744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:27.698 [2024-11-25 10:34:34.736768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:27.698 [2024-11-25 10:34:34.736803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:27.698 [2024-11-25 10:34:34.736833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:27.698 [2024-11-25 10:34:34.736865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736886] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:27.698 [2024-11-25 10:34:34.736895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.698 [2024-11-25 10:34:34.736916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:27.698 [2024-11-25 10:34:34.736930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:27.698 [2024-11-25 10:34:34.736939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:27.698 [2024-11-25 10:34:34.736951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:27.698 [2024-11-25 10:34:34.736960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:27.698 [2024-11-25 10:34:34.736971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:27.698 [2024-11-25 10:34:34.736980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:27.698 [2024-11-25 10:34:34.736992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:27.698 [2024-11-25 10:34:34.737001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.737012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:27.698 [2024-11-25 10:34:34.737021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:27.698 [2024-11-25 10:34:34.737033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.737042] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:27.698 [2024-11-25 10:34:34.737055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:27.698 [2024-11-25 10:34:34.737065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:27.698 [2024-11-25 10:34:34.737078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.698 [2024-11-25 10:34:34.737091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:27.698 [2024-11-25 10:34:34.737106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:27.698 [2024-11-25 10:34:34.737115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:27.698 [2024-11-25 10:34:34.737127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:27.698 [2024-11-25 10:34:34.737136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:27.698 [2024-11-25 10:34:34.737148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:27.698 [2024-11-25 10:34:34.737162] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:27.698 [2024-11-25 10:34:34.737177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.698 [2024-11-25 10:34:34.737189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:27.698 [2024-11-25 10:34:34.737201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:27.698 [2024-11-25 10:34:34.737212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:27.698 [2024-11-25 10:34:34.737224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:27.698 [2024-11-25 10:34:34.737235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:27.699 [2024-11-25 10:34:34.737247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:27.699 [2024-11-25 10:34:34.737257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:27.699 [2024-11-25 10:34:34.737271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:27.699 [2024-11-25 10:34:34.737281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:27.699 [2024-11-25 10:34:34.737296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:27.699 [2024-11-25 10:34:34.737353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:27.699 [2024-11-25 10:34:34.737375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:27.699 [2024-11-25 10:34:34.737399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:27.699 [2024-11-25 10:34:34.737409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:27.699 [2024-11-25 10:34:34.737422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:27.699 [2024-11-25 10:34:34.737433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.699 [2024-11-25 10:34:34.737446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:27.699 [2024-11-25 10:34:34.737456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:29:27.699 [2024-11-25 10:34:34.737469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.699 [2024-11-25 10:34:34.737520] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:27.699 [2024-11-25 10:34:34.737539] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:31.918 [2024-11-25 10:34:38.311940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.312008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:31.918 [2024-11-25 10:34:38.312024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3580.218 ms 00:29:31.918 [2024-11-25 10:34:38.312037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.352090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.352143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.918 [2024-11-25 10:34:38.352159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.781 ms 00:29:31.918 [2024-11-25 10:34:38.352188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.352318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.352335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:31.918 [2024-11-25 10:34:38.352350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:31.918 [2024-11-25 10:34:38.352366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.399195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.399241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.918 [2024-11-25 10:34:38.399255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.862 ms 00:29:31.918 [2024-11-25 10:34:38.399270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.399310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.399324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.918 [2024-11-25 10:34:38.399335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:31.918 [2024-11-25 10:34:38.399358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.399860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.399879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.918 [2024-11-25 10:34:38.399891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:29:31.918 [2024-11-25 10:34:38.399903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.400004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.400021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.918 [2024-11-25 10:34:38.400031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:31.918 [2024-11-25 10:34:38.400047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.420698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.420739] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.918 [2024-11-25 10:34:38.420753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.663 ms 00:29:31.918 [2024-11-25 10:34:38.420766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.433257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:31.918 [2024-11-25 10:34:38.436440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.436468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:31.918 [2024-11-25 10:34:38.436484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.608 ms 00:29:31.918 [2024-11-25 10:34:38.436501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.530436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.530508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:31.918 [2024-11-25 10:34:38.530543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.048 ms 00:29:31.918 [2024-11-25 10:34:38.530554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.530740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.530753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:31.918 [2024-11-25 10:34:38.530770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:29:31.918 [2024-11-25 10:34:38.530780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.567487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.567530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:31.918 [2024-11-25 10:34:38.567548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.708 ms 00:29:31.918 [2024-11-25 10:34:38.567561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.602914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.602949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:31.918 [2024-11-25 10:34:38.602966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.364 ms 00:29:31.918 [2024-11-25 10:34:38.602976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.603692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.603712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:31.918 [2024-11-25 10:34:38.603730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:29:31.918 [2024-11-25 10:34:38.603740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.703710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.703753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:31.918 [2024-11-25 10:34:38.703773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.053 ms 00:29:31.918 [2024-11-25 10:34:38.703784] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.740811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.741002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:31.918 [2024-11-25 10:34:38.741028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.988 ms 00:29:31.918 [2024-11-25 10:34:38.741039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.777817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.777864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:31.918 [2024-11-25 10:34:38.777881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.792 ms 00:29:31.918 [2024-11-25 10:34:38.777892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.814957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.815111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:31.918 [2024-11-25 10:34:38.815137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.079 ms 00:29:31.918 [2024-11-25 10:34:38.815148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.815244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.815256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:31.918 [2024-11-25 10:34:38.815272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.918 [2024-11-25 10:34:38.815283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.815399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.918 [2024-11-25 10:34:38.815412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:31.918 [2024-11-25 10:34:38.815425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:31.918 [2024-11-25 10:34:38.815435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.918 [2024-11-25 10:34:38.816465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4103.577 ms, result 0 00:29:31.918 { 00:29:31.918 "name": "ftl0", 00:29:31.919 "uuid": "da427e63-7cad-4dac-b19a-5c4ed8c3c31c" 00:29:31.919 } 00:29:31.919 10:34:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:31.919 10:34:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:32.178 /dev/nbd0 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:32.178 1+0 records in 00:29:32.178 1+0 records out 00:29:32.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467748 s, 8.8 MB/s 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:32.178 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:32.438 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:32.438 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:32.438 10:34:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:32.438 10:34:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:32.438 [2024-11-25 10:34:39.380197] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:29:32.438 [2024-11-25 10:34:39.380328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81109 ] 00:29:32.698 [2024-11-25 10:34:39.561921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.698 [2024-11-25 10:34:39.675210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.074  [2024-11-25T10:34:42.121Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-25T10:34:43.057Z] Copying: 398/1024 [MB] (199 MBps) [2024-11-25T10:34:44.434Z] Copying: 598/1024 [MB] (199 MBps) [2024-11-25T10:34:45.371Z] Copying: 794/1024 [MB] (195 MBps) [2024-11-25T10:34:45.371Z] Copying: 980/1024 [MB] (186 MBps) [2024-11-25T10:34:46.749Z] Copying: 1024/1024 [MB] (average 196 MBps) 00:29:39.637 00:29:39.637 10:34:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:41.013 10:34:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:41.272 [2024-11-25 10:34:48.178541] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
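The waitfornbd trace above expands to roughly the following: poll /proc/partitions until the kernel exposes the device, then prove it actually serves I/O by reading one block with O_DIRECT and checking that the capture is non-empty. A sketch reconstructed from the traced steps (autotest_common.sh@872-893), with retry pacing and cleanup simplified:

    # Wait for an nbd device to appear and become readable.
    waitfornbd() {
        local nbd_name=$1 i

        # Phase 1: device node registered with the kernel
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # Phase 2: device answers a direct 4 KiB read
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            if [[ $(stat -c %s /tmp/nbdtest) != 0 ]]; then
                rm -f /tmp/nbdtest
                return 0
            fi
            sleep 0.1
        done

        rm -f /tmp/nbdtest
        return 1
    }

    waitfornbd nbd0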
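For orientation, the data path in this stretch of the log condenses to the commands below: stage 1 GiB of random data (262144 blocks of 4096 B), record its digest, then stream the file onto the FTL volume through /dev/nbd0 with O_DIRECT; that digest is what a later stage compares after the dirty shutdown. Paths and flags are taken verbatim from the traced invocations; capturing the md5 into a variable is shown explicitly for clarity:

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # Stage random data and remember its checksum (dirty_shutdown.sh@75-76)
    "$spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
    md5_before=$(md5sum "$testfile" | cut -d' ' -f1)

    # Stream the file onto the FTL device via nbd, then flush it
    # (dirty_shutdown.sh@77-78)
    "$spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0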
00:29:41.272 [2024-11-25 10:34:48.178852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81206 ] 00:29:41.272 [2024-11-25 10:34:48.364267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.530 [2024-11-25 10:34:48.485144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.907  [2024-11-25T10:34:50.984Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-25T10:34:51.920Z] Copying: 36/1024 [MB] (18 MBps) [2024-11-25T10:34:52.857Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-25T10:34:54.234Z] Copying: 73/1024 [MB] (18 MBps) [2024-11-25T10:34:54.804Z] Copying: 91/1024 [MB] (18 MBps) [2024-11-25T10:34:56.185Z] Copying: 109/1024 [MB] (17 MBps) [2024-11-25T10:34:57.126Z] Copying: 126/1024 [MB] (17 MBps) [2024-11-25T10:34:58.065Z] Copying: 146/1024 [MB] (19 MBps) [2024-11-25T10:34:59.004Z] Copying: 164/1024 [MB] (18 MBps) [2024-11-25T10:34:59.943Z] Copying: 182/1024 [MB] (18 MBps) [2024-11-25T10:35:00.896Z] Copying: 200/1024 [MB] (18 MBps) [2024-11-25T10:35:01.833Z] Copying: 218/1024 [MB] (17 MBps) [2024-11-25T10:35:03.212Z] Copying: 236/1024 [MB] (17 MBps) [2024-11-25T10:35:03.781Z] Copying: 253/1024 [MB] (17 MBps) [2024-11-25T10:35:05.160Z] Copying: 271/1024 [MB] (17 MBps) [2024-11-25T10:35:06.097Z] Copying: 288/1024 [MB] (17 MBps) [2024-11-25T10:35:07.037Z] Copying: 305/1024 [MB] (17 MBps) [2024-11-25T10:35:08.013Z] Copying: 323/1024 [MB] (17 MBps) [2024-11-25T10:35:08.950Z] Copying: 341/1024 [MB] (17 MBps) [2024-11-25T10:35:09.885Z] Copying: 358/1024 [MB] (17 MBps) [2024-11-25T10:35:10.822Z] Copying: 376/1024 [MB] (17 MBps) [2024-11-25T10:35:12.201Z] Copying: 394/1024 [MB] (17 MBps) [2024-11-25T10:35:12.771Z] Copying: 412/1024 [MB] (17 MBps) [2024-11-25T10:35:14.151Z] Copying: 429/1024 [MB] (17 MBps) [2024-11-25T10:35:15.089Z] Copying: 447/1024 [MB] (17 MBps) [2024-11-25T10:35:16.026Z] Copying: 464/1024 [MB] (17 MBps) [2024-11-25T10:35:16.963Z] Copying: 482/1024 [MB] (17 MBps) [2024-11-25T10:35:17.900Z] Copying: 500/1024 [MB] (17 MBps) [2024-11-25T10:35:18.856Z] Copying: 517/1024 [MB] (17 MBps) [2024-11-25T10:35:19.792Z] Copying: 535/1024 [MB] (17 MBps) [2024-11-25T10:35:21.168Z] Copying: 552/1024 [MB] (17 MBps) [2024-11-25T10:35:22.105Z] Copying: 570/1024 [MB] (17 MBps) [2024-11-25T10:35:23.041Z] Copying: 588/1024 [MB] (17 MBps) [2024-11-25T10:35:23.978Z] Copying: 605/1024 [MB] (17 MBps) [2024-11-25T10:35:24.943Z] Copying: 623/1024 [MB] (17 MBps) [2024-11-25T10:35:25.883Z] Copying: 640/1024 [MB] (17 MBps) [2024-11-25T10:35:26.821Z] Copying: 658/1024 [MB] (17 MBps) [2024-11-25T10:35:27.757Z] Copying: 676/1024 [MB] (17 MBps) [2024-11-25T10:35:29.135Z] Copying: 693/1024 [MB] (17 MBps) [2024-11-25T10:35:30.077Z] Copying: 710/1024 [MB] (16 MBps) [2024-11-25T10:35:31.014Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-25T10:35:31.952Z] Copying: 744/1024 [MB] (17 MBps) [2024-11-25T10:35:32.888Z] Copying: 762/1024 [MB] (17 MBps) [2024-11-25T10:35:33.824Z] Copying: 779/1024 [MB] (17 MBps) [2024-11-25T10:35:34.762Z] Copying: 796/1024 [MB] (17 MBps) [2024-11-25T10:35:36.141Z] Copying: 813/1024 [MB] (16 MBps) [2024-11-25T10:35:37.078Z] Copying: 831/1024 [MB] (17 MBps) [2024-11-25T10:35:38.014Z] Copying: 848/1024 [MB] (17 MBps) [2024-11-25T10:35:38.950Z] Copying: 866/1024 [MB] (17 MBps) [2024-11-25T10:35:39.887Z] Copying: 883/1024 [MB] (17 MBps) 
[2024-11-25T10:35:40.823Z] Copying: 900/1024 [MB] (17 MBps) [2024-11-25T10:35:41.759Z] Copying: 918/1024 [MB] (18 MBps) [2024-11-25T10:35:43.232Z] Copying: 936/1024 [MB] (17 MBps) [2024-11-25T10:35:43.800Z] Copying: 953/1024 [MB] (17 MBps) [2024-11-25T10:35:44.734Z] Copying: 971/1024 [MB] (17 MBps) [2024-11-25T10:35:46.113Z] Copying: 989/1024 [MB] (17 MBps) [2024-11-25T10:35:47.053Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-25T10:35:47.992Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:30:40.880 00:30:40.880 10:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:40.880 10:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:41.140 10:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:41.400 [2024-11-25 10:35:48.263237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.263302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:41.400 [2024-11-25 10:35:48.263323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:41.400 [2024-11-25 10:35:48.263345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.263371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:41.400 [2024-11-25 10:35:48.267707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.267743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:41.400 [2024-11-25 10:35:48.267759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:30:41.400 [2024-11-25 10:35:48.267770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.269890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.269931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:41.400 [2024-11-25 10:35:48.269947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:30:41.400 [2024-11-25 10:35:48.269959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.287385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.287426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:41.400 [2024-11-25 10:35:48.287443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.426 ms 00:30:41.400 [2024-11-25 10:35:48.287469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.292576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.292615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:41.400 [2024-11-25 10:35:48.292630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:30:41.400 [2024-11-25 10:35:48.292640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.329649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.329691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:41.400 [2024-11-25 10:35:48.329707] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.992 ms 00:30:41.400 [2024-11-25 10:35:48.329733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.351836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.351876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:41.400 [2024-11-25 10:35:48.351897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.090 ms 00:30:41.400 [2024-11-25 10:35:48.351907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.352053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.352069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:41.400 [2024-11-25 10:35:48.352090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:41.400 [2024-11-25 10:35:48.352100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.388908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.388946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:41.400 [2024-11-25 10:35:48.388963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.841 ms 00:30:41.400 [2024-11-25 10:35:48.388972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.425199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.425249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:41.400 [2024-11-25 10:35:48.425267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.242 ms 00:30:41.400 [2024-11-25 10:35:48.425293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.460811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.460849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:41.400 [2024-11-25 10:35:48.460864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.528 ms 00:30:41.400 [2024-11-25 10:35:48.460874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.496461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.400 [2024-11-25 10:35:48.496505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:41.400 [2024-11-25 10:35:48.496521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.547 ms 00:30:41.400 [2024-11-25 10:35:48.496530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.400 [2024-11-25 10:35:48.496572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:41.401 [2024-11-25 10:35:48.496588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.496997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497234] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:41.401 [2024-11-25 10:35:48.497478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 
10:35:48.497569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:41.402 [2024-11-25 10:35:48.497848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:41.402 [2024-11-25 10:35:48.497860] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da427e63-7cad-4dac-b19a-5c4ed8c3c31c 00:30:41.402 [2024-11-25 10:35:48.497872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:41.402 [2024-11-25 10:35:48.497886] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:41.402 [2024-11-25 10:35:48.497899] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:41.402 [2024-11-25 10:35:48.497911] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:41.402 [2024-11-25 10:35:48.497921] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:41.402 [2024-11-25 10:35:48.497934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:41.402 [2024-11-25 10:35:48.497944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:41.402 [2024-11-25 10:35:48.497955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:41.402 [2024-11-25 10:35:48.497964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:41.402 [2024-11-25 10:35:48.497976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.402 [2024-11-25 10:35:48.497986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:41.402 [2024-11-25 10:35:48.497999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:30:41.402 [2024-11-25 10:35:48.498008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.518067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.662 [2024-11-25 10:35:48.518103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:41.662 [2024-11-25 10:35:48.518119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.038 ms 00:30:41.662 [2024-11-25 10:35:48.518145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.518708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.662 [2024-11-25 10:35:48.518721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:41.662 [2024-11-25 10:35:48.518735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:30:41.662 [2024-11-25 10:35:48.518745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.583880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.662 [2024-11-25 10:35:48.583923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:41.662 [2024-11-25 10:35:48.583938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.662 [2024-11-25 10:35:48.583966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.584025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.662 [2024-11-25 10:35:48.584036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:41.662 [2024-11-25 10:35:48.584050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.662 [2024-11-25 10:35:48.584061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.584158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.662 [2024-11-25 10:35:48.584173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:41.662 [2024-11-25 10:35:48.584186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.662 [2024-11-25 10:35:48.584196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:41.662 [2024-11-25 10:35:48.584221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.662 [2024-11-25 10:35:48.584232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:41.662 [2024-11-25 10:35:48.584244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.662 [2024-11-25 10:35:48.584255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.662 [2024-11-25 10:35:48.709138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.662 [2024-11-25 10:35:48.709194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:41.662 [2024-11-25 10:35:48.709212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.662 [2024-11-25 10:35:48.709223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.810426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.810487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:41.923 [2024-11-25 10:35:48.810520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.810531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.810663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.810677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:41.923 [2024-11-25 10:35:48.810694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.810704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.810766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.810778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:41.923 [2024-11-25 10:35:48.810791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.810801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.810921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.810934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:41.923 [2024-11-25 10:35:48.810950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.810968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.811011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.811023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:41.923 [2024-11-25 10:35:48.811037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.811047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.811088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.811099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:41.923 [2024-11-25 10:35:48.811111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 
10:35:48.811124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.811174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:41.923 [2024-11-25 10:35:48.811185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:41.923 [2024-11-25 10:35:48.811198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:41.923 [2024-11-25 10:35:48.811208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.923 [2024-11-25 10:35:48.811352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.963 ms, result 0 00:30:41.923 true 00:30:41.923 10:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80966 00:30:41.923 10:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80966 00:30:41.923 10:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:41.923 [2024-11-25 10:35:48.939306] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:30:41.923 [2024-11-25 10:35:48.939439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:30:42.183 [2024-11-25 10:35:49.126405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.183 [2024-11-25 10:35:49.237654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.610  [2024-11-25T10:35:51.659Z] Copying: 199/1024 [MB] (199 MBps) [2024-11-25T10:35:52.597Z] Copying: 397/1024 [MB] (197 MBps) [2024-11-25T10:35:53.975Z] Copying: 599/1024 [MB] (202 MBps) [2024-11-25T10:35:54.911Z] Copying: 800/1024 [MB] (200 MBps) [2024-11-25T10:35:54.911Z] Copying: 998/1024 [MB] (197 MBps) [2024-11-25T10:35:55.848Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:30:48.736 00:30:48.736 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80966 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:48.736 10:35:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:48.995 [2024-11-25 10:35:55.920448] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:30:48.995 [2024-11-25 10:35:55.920574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81891 ] 00:30:48.995 [2024-11-25 10:35:56.102461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.253 [2024-11-25 10:35:56.210709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.511 [2024-11-25 10:35:56.579743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.511 [2024-11-25 10:35:56.579802] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.770 [2024-11-25 10:35:56.645853] blobstore.c:4908:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:49.770 [2024-11-25 10:35:56.646168] blobstore.c:4855:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:49.770 [2024-11-25 10:35:56.646441] blobstore.c:4855:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:50.030 [2024-11-25 10:35:56.959041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.959083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:50.030 [2024-11-25 10:35:56.959098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:50.030 [2024-11-25 10:35:56.959112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.959159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.959171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:50.030 [2024-11-25 10:35:56.959182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:50.030 [2024-11-25 10:35:56.959192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.959212] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:50.030 [2024-11-25 10:35:56.960275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:50.030 [2024-11-25 10:35:56.960298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.960309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:50.030 [2024-11-25 10:35:56.960320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:30:50.030 [2024-11-25 10:35:56.960330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.961967] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:50.030 [2024-11-25 10:35:56.981258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.981294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:50.030 [2024-11-25 10:35:56.981308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:30:50.030 [2024-11-25 10:35:56.981319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.981387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.981399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:50.030 [2024-11-25 10:35:56.981426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:50.030 [2024-11-25 10:35:56.981437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.988098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.988124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:50.030 [2024-11-25 10:35:56.988136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.578 ms 00:30:50.030 [2024-11-25 10:35:56.988146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.988225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.988239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:50.030 [2024-11-25 10:35:56.988251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:50.030 [2024-11-25 10:35:56.988261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.988303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.988316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:50.030 [2024-11-25 10:35:56.988327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:50.030 [2024-11-25 10:35:56.988339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.988364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:50.030 [2024-11-25 10:35:56.993192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.993222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:50.030 [2024-11-25 10:35:56.993234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.842 ms 00:30:50.030 [2024-11-25 10:35:56.993245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.993277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.993288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:50.030 [2024-11-25 10:35:56.993299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:50.030 [2024-11-25 10:35:56.993309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.993365] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:50.030 [2024-11-25 10:35:56.993397] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:50.030 [2024-11-25 10:35:56.993432] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:50.030 [2024-11-25 10:35:56.993451] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:50.030 [2024-11-25 10:35:56.993550] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:50.030 [2024-11-25 10:35:56.993565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:50.030 
[2024-11-25 10:35:56.993579] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:50.030 [2024-11-25 10:35:56.993595] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:50.030 [2024-11-25 10:35:56.993608] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:50.030 [2024-11-25 10:35:56.993620] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:50.030 [2024-11-25 10:35:56.993631] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:50.030 [2024-11-25 10:35:56.993642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:50.030 [2024-11-25 10:35:56.993652] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:50.030 [2024-11-25 10:35:56.993664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.993674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:50.030 [2024-11-25 10:35:56.993684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:30:50.030 [2024-11-25 10:35:56.993695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.993768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.030 [2024-11-25 10:35:56.993783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:50.030 [2024-11-25 10:35:56.993794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:50.030 [2024-11-25 10:35:56.993804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.030 [2024-11-25 10:35:56.993900] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:50.030 [2024-11-25 10:35:56.993915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:50.030 [2024-11-25 10:35:56.993926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:50.030 [2024-11-25 10:35:56.993937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.030 [2024-11-25 10:35:56.993947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:50.030 [2024-11-25 10:35:56.993958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:50.030 [2024-11-25 10:35:56.993969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:50.030 [2024-11-25 10:35:56.993980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:50.030 [2024-11-25 10:35:56.993990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:50.030 [2024-11-25 10:35:56.994011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:50.030 [2024-11-25 10:35:56.994021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:50.030 [2024-11-25 10:35:56.994030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:50.030 [2024-11-25 10:35:56.994040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:50.030 [2024-11-25 10:35:56.994051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:50.030 [2024-11-25 10:35:56.994060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:50.030 [2024-11-25 10:35:56.994069] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.030 [2024-11-25 10:35:56.994079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:50.030 [2024-11-25 10:35:56.994088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:50.030 [2024-11-25 10:35:56.994097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.030 [2024-11-25 10:35:56.994106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:50.030 [2024-11-25 10:35:56.994115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:50.030 [2024-11-25 10:35:56.994124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.030 [2024-11-25 10:35:56.994133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:50.030 [2024-11-25 10:35:56.994143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:50.030 [2024-11-25 10:35:56.994153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.030 [2024-11-25 10:35:56.994162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:50.031 [2024-11-25 10:35:56.994171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:50.031 [2024-11-25 10:35:56.994180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.031 [2024-11-25 10:35:56.994188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:50.031 [2024-11-25 10:35:56.994197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:50.031 [2024-11-25 10:35:56.994206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:50.031 [2024-11-25 10:35:56.994215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:50.031 [2024-11-25 10:35:56.994224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:50.031 [2024-11-25 10:35:56.994232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:50.031 [2024-11-25 10:35:56.994241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:50.031 [2024-11-25 10:35:56.994251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:50.031 [2024-11-25 10:35:56.994261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:50.031 [2024-11-25 10:35:56.994270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:50.031 [2024-11-25 10:35:56.994280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:50.031 [2024-11-25 10:35:56.994289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.031 [2024-11-25 10:35:56.994298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:50.031 [2024-11-25 10:35:56.994308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:50.031 [2024-11-25 10:35:56.994318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.031 [2024-11-25 10:35:56.994327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:50.031 [2024-11-25 10:35:56.994337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:50.031 [2024-11-25 10:35:56.994351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:50.031 [2024-11-25 10:35:56.994362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:50.031 [2024-11-25 
10:35:56.994373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:50.031 [2024-11-25 10:35:56.994383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:50.031 [2024-11-25 10:35:56.994392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:50.031 [2024-11-25 10:35:56.994401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:50.031 [2024-11-25 10:35:56.994410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:50.031 [2024-11-25 10:35:56.994420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:50.031 [2024-11-25 10:35:56.994430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:50.031 [2024-11-25 10:35:56.994442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:50.031 [2024-11-25 10:35:56.994463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:50.031 [2024-11-25 10:35:56.994474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:50.031 [2024-11-25 10:35:56.994485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:50.031 [2024-11-25 10:35:56.994507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:50.031 [2024-11-25 10:35:56.994517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:50.031 [2024-11-25 10:35:56.994527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:50.031 [2024-11-25 10:35:56.994538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:50.031 [2024-11-25 10:35:56.994549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:50.031 [2024-11-25 10:35:56.994559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:50.031 [2024-11-25 10:35:56.994613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:50.031 [2024-11-25 10:35:56.994624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:50.031 [2024-11-25 10:35:56.994648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:50.031 [2024-11-25 10:35:56.994659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:50.031 [2024-11-25 10:35:56.994670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:50.031 [2024-11-25 10:35:56.994681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:56.994692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:50.031 [2024-11-25 10:35:56.994702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:30:50.031 [2024-11-25 10:35:56.994712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.034441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.034479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:50.031 [2024-11-25 10:35:57.034514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.743 ms 00:30:50.031 [2024-11-25 10:35:57.034526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.034614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.034625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:50.031 [2024-11-25 10:35:57.034637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:50.031 [2024-11-25 10:35:57.034646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.090019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.090063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:50.031 [2024-11-25 10:35:57.090080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.400 ms 00:30:50.031 [2024-11-25 10:35:57.090092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.090142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.090154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:50.031 [2024-11-25 10:35:57.090166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:50.031 [2024-11-25 10:35:57.090176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.090688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.090705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:50.031 [2024-11-25 10:35:57.090717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:30:50.031 [2024-11-25 10:35:57.090732] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.090856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.090872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:50.031 [2024-11-25 10:35:57.090884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:30:50.031 [2024-11-25 10:35:57.090895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.108697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.108736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:50.031 [2024-11-25 10:35:57.108750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.810 ms 00:30:50.031 [2024-11-25 10:35:57.108761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.031 [2024-11-25 10:35:57.127691] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:50.031 [2024-11-25 10:35:57.127730] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:50.031 [2024-11-25 10:35:57.127747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.031 [2024-11-25 10:35:57.127759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:50.031 [2024-11-25 10:35:57.127771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.893 ms 00:30:50.031 [2024-11-25 10:35:57.127781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.158026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.158114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:50.290 [2024-11-25 10:35:57.158132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.244 ms 00:30:50.290 [2024-11-25 10:35:57.158143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.178345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.178410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:50.290 [2024-11-25 10:35:57.178428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.175 ms 00:30:50.290 [2024-11-25 10:35:57.178439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.198230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.198290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:50.290 [2024-11-25 10:35:57.198306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.712 ms 00:30:50.290 [2024-11-25 10:35:57.198317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.199167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.199196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:50.290 [2024-11-25 10:35:57.199210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:30:50.290 [2024-11-25 10:35:57.199221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:50.290 [2024-11-25 10:35:57.287691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.287751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:50.290 [2024-11-25 10:35:57.287768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.587 ms 00:30:50.290 [2024-11-25 10:35:57.287780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.299413] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:50.290 [2024-11-25 10:35:57.302610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.302644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:50.290 [2024-11-25 10:35:57.302659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.772 ms 00:30:50.290 [2024-11-25 10:35:57.302675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.302787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.302803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:50.290 [2024-11-25 10:35:57.302815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:50.290 [2024-11-25 10:35:57.302827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.302919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.302933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:50.290 [2024-11-25 10:35:57.302944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:50.290 [2024-11-25 10:35:57.302955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.302984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.302995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:50.290 [2024-11-25 10:35:57.303006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:50.290 [2024-11-25 10:35:57.303016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.303048] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:50.290 [2024-11-25 10:35:57.303060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.303071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:50.290 [2024-11-25 10:35:57.303081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:50.290 [2024-11-25 10:35:57.303096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.340433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 10:35:57.340477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:50.290 [2024-11-25 10:35:57.340499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.373 ms 00:30:50.290 [2024-11-25 10:35:57.340511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.340599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.290 [2024-11-25 
10:35:57.340612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:50.290 [2024-11-25 10:35:57.340624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:50.290 [2024-11-25 10:35:57.340635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.290 [2024-11-25 10:35:57.341954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.087 ms, result 0 00:30:51.277  [2024-11-25T10:35:59.776Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-25T10:36:00.713Z] Copying: 48/1024 [MB] (24 MBps) [2024-11-25T10:36:01.646Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-25T10:36:02.581Z] Copying: 99/1024 [MB] (25 MBps) [2024-11-25T10:36:03.516Z] Copying: 124/1024 [MB] (25 MBps) [2024-11-25T10:36:04.458Z] Copying: 149/1024 [MB] (24 MBps) [2024-11-25T10:36:05.395Z] Copying: 173/1024 [MB] (23 MBps) [2024-11-25T10:36:06.773Z] Copying: 197/1024 [MB] (23 MBps) [2024-11-25T10:36:07.340Z] Copying: 222/1024 [MB] (25 MBps) [2024-11-25T10:36:08.720Z] Copying: 246/1024 [MB] (24 MBps) [2024-11-25T10:36:09.657Z] Copying: 270/1024 [MB] (23 MBps) [2024-11-25T10:36:10.592Z] Copying: 294/1024 [MB] (24 MBps) [2024-11-25T10:36:11.528Z] Copying: 320/1024 [MB] (25 MBps) [2024-11-25T10:36:12.465Z] Copying: 346/1024 [MB] (26 MBps) [2024-11-25T10:36:13.402Z] Copying: 372/1024 [MB] (25 MBps) [2024-11-25T10:36:14.340Z] Copying: 397/1024 [MB] (24 MBps) [2024-11-25T10:36:15.724Z] Copying: 421/1024 [MB] (24 MBps) [2024-11-25T10:36:16.661Z] Copying: 446/1024 [MB] (25 MBps) [2024-11-25T10:36:17.638Z] Copying: 471/1024 [MB] (24 MBps) [2024-11-25T10:36:18.573Z] Copying: 495/1024 [MB] (24 MBps) [2024-11-25T10:36:19.508Z] Copying: 520/1024 [MB] (24 MBps) [2024-11-25T10:36:20.445Z] Copying: 545/1024 [MB] (25 MBps) [2024-11-25T10:36:21.381Z] Copying: 571/1024 [MB] (25 MBps) [2024-11-25T10:36:22.318Z] Copying: 597/1024 [MB] (25 MBps) [2024-11-25T10:36:23.693Z] Copying: 622/1024 [MB] (24 MBps) [2024-11-25T10:36:24.628Z] Copying: 646/1024 [MB] (24 MBps) [2024-11-25T10:36:25.579Z] Copying: 670/1024 [MB] (24 MBps) [2024-11-25T10:36:26.527Z] Copying: 695/1024 [MB] (24 MBps) [2024-11-25T10:36:27.462Z] Copying: 718/1024 [MB] (23 MBps) [2024-11-25T10:36:28.395Z] Copying: 743/1024 [MB] (24 MBps) [2024-11-25T10:36:29.329Z] Copying: 766/1024 [MB] (23 MBps) [2024-11-25T10:36:30.703Z] Copying: 790/1024 [MB] (23 MBps) [2024-11-25T10:36:31.638Z] Copying: 813/1024 [MB] (23 MBps) [2024-11-25T10:36:32.572Z] Copying: 836/1024 [MB] (22 MBps) [2024-11-25T10:36:33.506Z] Copying: 859/1024 [MB] (23 MBps) [2024-11-25T10:36:34.441Z] Copying: 883/1024 [MB] (23 MBps) [2024-11-25T10:36:35.375Z] Copying: 906/1024 [MB] (23 MBps) [2024-11-25T10:36:36.312Z] Copying: 930/1024 [MB] (24 MBps) [2024-11-25T10:36:37.686Z] Copying: 954/1024 [MB] (23 MBps) [2024-11-25T10:36:38.621Z] Copying: 977/1024 [MB] (23 MBps) [2024-11-25T10:36:39.557Z] Copying: 1000/1024 [MB] (22 MBps) [2024-11-25T10:36:40.123Z] Copying: 1023/1024 [MB] (22 MBps) [2024-11-25T10:36:40.123Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-25 10:36:40.047660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.011 [2024-11-25 10:36:40.047894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:33.011 [2024-11-25 10:36:40.047920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:33.011 [2024-11-25 10:36:40.047932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:33.011 [2024-11-25 10:36:40.050009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:33.011 [2024-11-25 10:36:40.054686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.011 [2024-11-25 10:36:40.054731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:33.012 [2024-11-25 10:36:40.054745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:31:33.012 [2024-11-25 10:36:40.054766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.012 [2024-11-25 10:36:40.063110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.012 [2024-11-25 10:36:40.063153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:33.012 [2024-11-25 10:36:40.063168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.506 ms 00:31:33.012 [2024-11-25 10:36:40.063179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.012 [2024-11-25 10:36:40.086917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.012 [2024-11-25 10:36:40.086965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:33.012 [2024-11-25 10:36:40.086979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.759 ms 00:31:33.012 [2024-11-25 10:36:40.086991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.012 [2024-11-25 10:36:40.091983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.012 [2024-11-25 10:36:40.092031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:33.012 [2024-11-25 10:36:40.092045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.968 ms 00:31:33.012 [2024-11-25 10:36:40.092056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.128858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.128907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:33.271 [2024-11-25 10:36:40.128921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.816 ms 00:31:33.271 [2024-11-25 10:36:40.128931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.149727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.149776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:33.271 [2024-11-25 10:36:40.149791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.788 ms 00:31:33.271 [2024-11-25 10:36:40.149802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.270785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.270874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:33.271 [2024-11-25 10:36:40.270903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.128 ms 00:31:33.271 [2024-11-25 10:36:40.270914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.308003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.308062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:33.271 
[2024-11-25 10:36:40.308078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.128 ms 00:31:33.271 [2024-11-25 10:36:40.308103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.344191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.344240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:33.271 [2024-11-25 10:36:40.344254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.102 ms 00:31:33.271 [2024-11-25 10:36:40.344264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.271 [2024-11-25 10:36:40.379534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.271 [2024-11-25 10:36:40.379582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:33.271 [2024-11-25 10:36:40.379596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.283 ms 00:31:33.271 [2024-11-25 10:36:40.379607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.530 [2024-11-25 10:36:40.415207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.530 [2024-11-25 10:36:40.415259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:33.530 [2024-11-25 10:36:40.415274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.572 ms 00:31:33.530 [2024-11-25 10:36:40.415284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.530 [2024-11-25 10:36:40.415325] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:33.530 [2024-11-25 10:36:40.415342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:31:33.530 [2024-11-25 10:36:40.415355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 
[2024-11-25 10:36:40.415486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:33.530 [2024-11-25 10:36:40.415552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:31:33.531 [2024-11-25 10:36:40.415765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.415998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 
0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:33.531 [2024-11-25 10:36:40.416267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:33.532 [2024-11-25 10:36:40.416436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:33.532 [2024-11-25 10:36:40.416446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da427e63-7cad-4dac-b19a-5c4ed8c3c31c 00:31:33.532 [2024-11-25 10:36:40.416475] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496 00:31:33.532 [2024-11-25 10:36:40.416485] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456 00:31:33.532 [2024-11-25 10:36:40.416503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496 00:31:33.532 [2024-11-25 10:36:40.416514] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:31:33.532 [2024-11-25 10:36:40.416524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:33.532 [2024-11-25 10:36:40.416534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:33.532 [2024-11-25 10:36:40.416545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:33.532 [2024-11-25 10:36:40.416554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:33.532 [2024-11-25 10:36:40.416563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:33.532 [2024-11-25 10:36:40.416573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.532 [2024-11-25 10:36:40.416583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:33.532 [2024-11-25 10:36:40.416593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:31:33.532 [2024-11-25 10:36:40.416603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.436530] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.532 [2024-11-25 10:36:40.436577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:33.532 [2024-11-25 10:36:40.436591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.920 ms 00:31:33.532 [2024-11-25 10:36:40.436602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.437130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:33.532 [2024-11-25 10:36:40.437142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:33.532 [2024-11-25 10:36:40.437153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:31:33.532 [2024-11-25 10:36:40.437167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.489928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.532 [2024-11-25 10:36:40.489982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:33.532 [2024-11-25 10:36:40.489996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.532 [2024-11-25 10:36:40.490008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.490072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.532 [2024-11-25 10:36:40.490083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:33.532 [2024-11-25 10:36:40.490094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.532 [2024-11-25 10:36:40.490111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.490204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.532 [2024-11-25 10:36:40.490218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:33.532 [2024-11-25 10:36:40.490229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.532 [2024-11-25 10:36:40.490239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.490255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.532 [2024-11-25 10:36:40.490266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:33.532 [2024-11-25 10:36:40.490277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.532 [2024-11-25 10:36:40.490287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.532 [2024-11-25 10:36:40.612831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.532 [2024-11-25 10:36:40.612897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:33.532 [2024-11-25 10:36:40.612912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.532 [2024-11-25 10:36:40.612923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:33.791 [2024-11-25 10:36:40.713237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:33.791 [2024-11-25 10:36:40.713372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:33.791 [2024-11-25 10:36:40.713461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:33.791 [2024-11-25 10:36:40.713638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:33.791 [2024-11-25 10:36:40.713708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:33.791 [2024-11-25 10:36:40.713783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.791 [2024-11-25 10:36:40.713847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:33.791 [2024-11-25 10:36:40.713857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.791 [2024-11-25 10:36:40.713867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.791 [2024-11-25 10:36:40.713985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 668.884 ms, result 0 00:31:35.164 00:31:35.164 00:31:35.164 10:36:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:37.063 10:36:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:37.063 [2024-11-25 10:36:43.897551] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
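Two details from the shutdown and read-back above are worth unpacking. First, the statistics block dumped while setting the FTL clean state reports WAF: 1.0090; that is simply total writes over user writes from the same dump, i.e. the FTL issued 960 blocks of its own metadata and housekeeping writes on top of the user's 106496:

  # WAF = total writes / user writes, from the ftl_dev_dump_stats block above
  echo "scale=4; 107456 / 106496" | bc    # 1.0090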
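Second, the md5sum/spdk_dd pair above is the core of the dirty-shutdown verification: dirty_shutdown.sh hashes a reference file, then reads the entire device back out of ftl0 so the digests can be compared. A minimal sketch of that flow with the paths and flags exactly as logged; the final hash for comparison is an assumption here, since the comparison itself happens past the end of this excerpt:

  FTL=/home/vagrant/spdk_repo/spdk/test/ftl
  md5sum "$FTL/testfile2"
  # 262144 I/O units x 4 KiB = 1 GiB, i.e. the 1024/1024 [MB] total shown in
  # the Copying progress entries that follow
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of="$FTL/testfile" \
      --count=262144 --json="$FTL/config/ftl.json"
  md5sum "$FTL/testfile"    # assumed comparison hash, not shown in this excerpt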
00:31:37.063 [2024-11-25 10:36:43.897681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82373 ] 00:31:37.063 [2024-11-25 10:36:44.059826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.320 [2024-11-25 10:36:44.179196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.581 [2024-11-25 10:36:44.510181] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:37.581 [2024-11-25 10:36:44.510252] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:37.581 [2024-11-25 10:36:44.671195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.581 [2024-11-25 10:36:44.671262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:37.581 [2024-11-25 10:36:44.671278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:37.581 [2024-11-25 10:36:44.671289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.581 [2024-11-25 10:36:44.671340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.581 [2024-11-25 10:36:44.671356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:37.581 [2024-11-25 10:36:44.671367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:37.581 [2024-11-25 10:36:44.671378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.581 [2024-11-25 10:36:44.671401] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:37.581 [2024-11-25 10:36:44.672372] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:37.581 [2024-11-25 10:36:44.672404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.581 [2024-11-25 10:36:44.672416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:37.581 [2024-11-25 10:36:44.672429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:31:37.581 [2024-11-25 10:36:44.672440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.581 [2024-11-25 10:36:44.673934] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:37.841 [2024-11-25 10:36:44.692415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.692458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:37.841 [2024-11-25 10:36:44.692472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.511 ms 00:31:37.841 [2024-11-25 10:36:44.692484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.692562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.692576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:37.841 [2024-11-25 10:36:44.692588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:37.841 [2024-11-25 10:36:44.692599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.699485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:37.841 [2024-11-25 10:36:44.699522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:37.841 [2024-11-25 10:36:44.699536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.821 ms 00:31:37.841 [2024-11-25 10:36:44.699550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.699629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.699644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:37.841 [2024-11-25 10:36:44.699656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:37.841 [2024-11-25 10:36:44.699667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.699707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.699721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:37.841 [2024-11-25 10:36:44.699732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:37.841 [2024-11-25 10:36:44.699742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.699771] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:37.841 [2024-11-25 10:36:44.704621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.704657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:37.841 [2024-11-25 10:36:44.704674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:31:37.841 [2024-11-25 10:36:44.704685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.704716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.841 [2024-11-25 10:36:44.704728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:37.841 [2024-11-25 10:36:44.704739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:37.841 [2024-11-25 10:36:44.704749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.841 [2024-11-25 10:36:44.704804] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:37.841 [2024-11-25 10:36:44.704829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:37.841 [2024-11-25 10:36:44.704866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:37.841 [2024-11-25 10:36:44.704887] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:37.841 [2024-11-25 10:36:44.704977] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:37.841 [2024-11-25 10:36:44.704992] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:37.841 [2024-11-25 10:36:44.705006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:37.842 [2024-11-25 10:36:44.705020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705033] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:37.842 [2024-11-25 10:36:44.705055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:37.842 [2024-11-25 10:36:44.705066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:37.842 [2024-11-25 10:36:44.705081] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:37.842 [2024-11-25 10:36:44.705093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-11-25 10:36:44.705104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:37.842 [2024-11-25 10:36:44.705114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:31:37.842 [2024-11-25 10:36:44.705125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-11-25 10:36:44.705196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-11-25 10:36:44.705215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:37.842 [2024-11-25 10:36:44.705226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:37.842 [2024-11-25 10:36:44.705237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-11-25 10:36:44.705334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:37.842 [2024-11-25 10:36:44.705357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:37.842 [2024-11-25 10:36:44.705369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:37.842 [2024-11-25 10:36:44.705409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:37.842 [2024-11-25 10:36:44.705442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:37.842 [2024-11-25 10:36:44.705463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:37.842 [2024-11-25 10:36:44.705474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:37.842 [2024-11-25 10:36:44.705484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:37.842 [2024-11-25 10:36:44.705518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:37.842 [2024-11-25 10:36:44.705528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:37.842 [2024-11-25 10:36:44.705538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:37.842 [2024-11-25 10:36:44.705558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705568] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:37.842 [2024-11-25 10:36:44.705587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:37.842 [2024-11-25 10:36:44.705618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:37.842 [2024-11-25 10:36:44.705646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:37.842 [2024-11-25 10:36:44.705673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:37.842 [2024-11-25 10:36:44.705701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:37.842 [2024-11-25 10:36:44.705720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:37.842 [2024-11-25 10:36:44.705729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:37.842 [2024-11-25 10:36:44.705738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:37.842 [2024-11-25 10:36:44.705748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:37.842 [2024-11-25 10:36:44.705757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:37.842 [2024-11-25 10:36:44.705765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:37.842 [2024-11-25 10:36:44.705783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:37.842 [2024-11-25 10:36:44.705793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705803] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:37.842 [2024-11-25 10:36:44.705814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:37.842 [2024-11-25 10:36:44.705824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:37.842 [2024-11-25 10:36:44.705845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:37.842 [2024-11-25 10:36:44.705855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:37.842 [2024-11-25 10:36:44.705865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:37.842 
[2024-11-25 10:36:44.705875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:37.842 [2024-11-25 10:36:44.705884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:37.842 [2024-11-25 10:36:44.705894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:37.842 [2024-11-25 10:36:44.705905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:37.842 [2024-11-25 10:36:44.705918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.705933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:37.842 [2024-11-25 10:36:44.705944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:37.842 [2024-11-25 10:36:44.705956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:37.842 [2024-11-25 10:36:44.705966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:37.842 [2024-11-25 10:36:44.705976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:37.842 [2024-11-25 10:36:44.705987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:37.842 [2024-11-25 10:36:44.705998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:37.842 [2024-11-25 10:36:44.706009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:37.842 [2024-11-25 10:36:44.706020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:37.842 [2024-11-25 10:36:44.706030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:37.842 [2024-11-25 10:36:44.706086] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:37.842 [2024-11-25 10:36:44.706098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:37.842 [2024-11-25 10:36:44.706122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:37.842 [2024-11-25 10:36:44.706134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:37.842 [2024-11-25 10:36:44.706146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:37.842 [2024-11-25 10:36:44.706158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-11-25 10:36:44.706169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:37.842 [2024-11-25 10:36:44.706180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:31:37.842 [2024-11-25 10:36:44.706190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-11-25 10:36:44.742190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-11-25 10:36:44.742240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:37.842 [2024-11-25 10:36:44.742255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.009 ms 00:31:37.842 [2024-11-25 10:36:44.742270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-11-25 10:36:44.742360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.742373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:37.843 [2024-11-25 10:36:44.742384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:37.843 [2024-11-25 10:36:44.742394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.794791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.794835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:37.843 [2024-11-25 10:36:44.794850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.404 ms 00:31:37.843 [2024-11-25 10:36:44.794869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.794925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.794937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:37.843 [2024-11-25 10:36:44.794954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:37.843 [2024-11-25 10:36:44.794965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.795460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.795484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:37.843 [2024-11-25 10:36:44.795509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:31:37.843 [2024-11-25 10:36:44.795520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.795642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.795657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:37.843 [2024-11-25 10:36:44.795673] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:37.843 [2024-11-25 10:36:44.795684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.813694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.813736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:37.843 [2024-11-25 10:36:44.813753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.015 ms 00:31:37.843 [2024-11-25 10:36:44.813764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.832572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:37.843 [2024-11-25 10:36:44.832617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:37.843 [2024-11-25 10:36:44.832634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.832645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:37.843 [2024-11-25 10:36:44.832657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.771 ms 00:31:37.843 [2024-11-25 10:36:44.832668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.862911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.862959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:37.843 [2024-11-25 10:36:44.862974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.240 ms 00:31:37.843 [2024-11-25 10:36:44.862985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.882010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.882057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:37.843 [2024-11-25 10:36:44.882071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.002 ms 00:31:37.843 [2024-11-25 10:36:44.882082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.900797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.900840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:37.843 [2024-11-25 10:36:44.900853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.702 ms 00:31:37.843 [2024-11-25 10:36:44.900864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.843 [2024-11-25 10:36:44.901671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.843 [2024-11-25 10:36:44.901699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:37.843 [2024-11-25 10:36:44.901716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:31:37.843 [2024-11-25 10:36:44.901727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:44.987376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:44.987441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:38.101 [2024-11-25 10:36:44.987464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.761 ms 00:31:38.101 [2024-11-25 10:36:44.987475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:44.998983] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:38.101 [2024-11-25 10:36:45.002208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.002246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:38.101 [2024-11-25 10:36:45.002262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.645 ms 00:31:38.101 [2024-11-25 10:36:45.002274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:45.002380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.002395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:38.101 [2024-11-25 10:36:45.002407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:38.101 [2024-11-25 10:36:45.002422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:45.003902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.003942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:38.101 [2024-11-25 10:36:45.003955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:31:38.101 [2024-11-25 10:36:45.003965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:45.004006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.004017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:38.101 [2024-11-25 10:36:45.004029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:38.101 [2024-11-25 10:36:45.004039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:45.004081] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:38.101 [2024-11-25 10:36:45.004094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.004105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:38.101 [2024-11-25 10:36:45.004116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:38.101 [2024-11-25 10:36:45.004127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.101 [2024-11-25 10:36:45.040413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.101 [2024-11-25 10:36:45.040460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:38.101 [2024-11-25 10:36:45.040475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.323 ms 00:31:38.101 [2024-11-25 10:36:45.040499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-11-25 10:36:45.040579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-11-25 10:36:45.040593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:38.102 [2024-11-25 10:36:45.040605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:38.102 [2024-11-25 10:36:45.040615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
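The layout dump earlier in this startup prints the same geometry twice, once in MiB and once as hex block ranges, and the two agree at the 4 KiB FTL block size the dump implies (e.g. the 0x20-block superblock region is listed as 0.12 MiB). For instance, the base-device data region (type 0x9, blk_sz:0x1900000) is exactly the 102400.00 MiB reported for data_btm:

  # 0x1900000 blocks x 4 KiB per block, expressed in MiB
  echo $(( 0x1900000 * 4 / 1024 ))    # 102400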
00:31:38.102 [2024-11-25 10:36:45.041691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.621 ms, result 0 00:31:39.478  [2024-11-25T10:36:47.529Z] Copying: 1276/1048576 [kB] (1276 kBps) [... intermediate Copying progress updates, ramping up to a steady 31-35 MBps, elided ...] [2024-11-25T10:37:18.312Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-25 10:37:18.002810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.002895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:11.200 [2024-11-25 10:37:18.002923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:11.200 [2024-11-25 10:37:18.002942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.002982] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:11.200 [2024-11-25 10:37:18.009045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.009096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:11.200 [2024-11-25 10:37:18.009112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.038 ms 00:32:11.200 [2024-11-25 10:37:18.009126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.009465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.009507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:11.200 [2024-11-25 10:37:18.009523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.215 ms 00:32:11.200 [2024-11-25 10:37:18.009535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.020653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.020707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:11.200 [2024-11-25 10:37:18.020722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.114 ms 00:32:11.200 [2024-11-25 10:37:18.020734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.025780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.025814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:11.200 [2024-11-25 10:37:18.025834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.018 ms 00:32:11.200 [2024-11-25 10:37:18.025844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.062179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.062220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:11.200 [2024-11-25 10:37:18.062234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.333 ms 00:32:11.200 [2024-11-25 10:37:18.062244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.083639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.083680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:11.200 [2024-11-25 10:37:18.083694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.388 ms 00:32:11.200 [2024-11-25 10:37:18.083706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.085796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.085835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:11.200 [2024-11-25 10:37:18.085848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:32:11.200 [2024-11-25 10:37:18.085859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.122790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.122828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:11.200 [2024-11-25 10:37:18.122842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.965 ms 00:32:11.200 [2024-11-25 10:37:18.122852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.159381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.159420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:11.200 [2024-11-25 10:37:18.159434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.545 ms 00:32:11.200 [2024-11-25 10:37:18.159443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.195498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.195551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:11.200 [2024-11-25 
10:37:18.195565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.065 ms 00:32:11.200 [2024-11-25 10:37:18.195576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.231906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.200 [2024-11-25 10:37:18.231944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:11.200 [2024-11-25 10:37:18.231958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.310 ms 00:32:11.200 [2024-11-25 10:37:18.231969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.200 [2024-11-25 10:37:18.232007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:11.200 [2024-11-25 10:37:18.232024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:11.200 [2024-11-25 10:37:18.232039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:11.200 [2024-11-25 10:37:18.232051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:11.200 [2024-11-25 10:37:18.232132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232241] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 
10:37:18.232534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:32:11.201 [2024-11-25 10:37:18.232813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.232996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:11.201 [2024-11-25 10:37:18.233136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:11.202 [2024-11-25 10:37:18.233155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:11.202 [2024-11-25 10:37:18.233165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da427e63-7cad-4dac-b19a-5c4ed8c3c31c 00:32:11.202 [2024-11-25 10:37:18.233177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:11.202 [2024-11-25 10:37:18.233187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158144 00:32:11.202 [2024-11-25 10:37:18.233202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156160 00:32:11.202 [2024-11-25 10:37:18.233213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:32:11.202 [2024-11-25 10:37:18.233224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:11.202 [2024-11-25 10:37:18.233246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:11.202 [2024-11-25 10:37:18.233257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:11.202 [2024-11-25 10:37:18.233266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:11.202 [2024-11-25 10:37:18.233275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:11.202 [2024-11-25 10:37:18.233285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.202 [2024-11-25 10:37:18.233308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:11.202 [2024-11-25 10:37:18.233319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:32:11.202 [2024-11-25 10:37:18.233329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 10:37:18.253087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.202 [2024-11-25 10:37:18.253131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:11.202 [2024-11-25 10:37:18.253145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.752 ms 00:32:11.202 [2024-11-25 10:37:18.253156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 10:37:18.253709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:11.202 [2024-11-25 10:37:18.253731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:11.202 [2024-11-25 10:37:18.253743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:32:11.202 [2024-11-25 10:37:18.253754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 
10:37:18.304846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.202 [2024-11-25 10:37:18.304885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:11.202 [2024-11-25 10:37:18.304899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.202 [2024-11-25 10:37:18.304910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 10:37:18.304961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.202 [2024-11-25 10:37:18.304973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:11.202 [2024-11-25 10:37:18.304984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.202 [2024-11-25 10:37:18.304994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 10:37:18.305065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.202 [2024-11-25 10:37:18.305080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:11.202 [2024-11-25 10:37:18.305091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.202 [2024-11-25 10:37:18.305101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.202 [2024-11-25 10:37:18.305119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.202 [2024-11-25 10:37:18.305130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:11.202 [2024-11-25 10:37:18.305141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.202 [2024-11-25 10:37:18.305151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.425516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.425572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:11.462 [2024-11-25 10:37:18.425587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.425597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.524837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.524891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:11.462 [2024-11-25 10:37:18.524906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.524917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:11.462 [2024-11-25 10:37:18.525042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:11.462 [2024-11-25 10:37:18.525115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525125] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:11.462 [2024-11-25 10:37:18.525263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:11.462 [2024-11-25 10:37:18.525331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:11.462 [2024-11-25 10:37:18.525412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:11.462 [2024-11-25 10:37:18.525474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:11.462 [2024-11-25 10:37:18.525484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:11.462 [2024-11-25 10:37:18.525516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:11.462 [2024-11-25 10:37:18.525672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.680 ms, result 0 00:32:12.839 00:32:12.839 00:32:12.839 10:37:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:14.741 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:14.741 10:37:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:14.741 [2024-11-25 10:37:21.442548] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
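
Two numbers around this point are easy to sanity-check. The statistics dumped during the shutdown above report WAF 1.0127, which is just total writes over user writes: 158144 / 156160 ≈ 1.0127, i.e. the FTL issued roughly 1.3% more writes than the test requested. And the spdk_dd invocation above takes --count and --skip in logical blocks; assuming the FTL bdev's 4096-byte block size (an assumption, but consistent with the 1024 MiB copy totals reported in this log), 262144 blocks is exactly 1 GiB, so the command reads the second gigabyte of ftl0 back into testfile2:

    # Back-of-the-envelope checks; the 4096-byte logical block is an assumption
    # consistent with the 1024 MiB copy totals in this log.
    echo "scale=4; 158144 / 156160" | bc           # WAF -> 1.0127
    echo "$(( 262144 * 4096 / 1024 / 1024 )) MiB"  # --count/--skip span -> 1024 MiB
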
00:32:14.741 [2024-11-25 10:37:21.442668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82744 ] 00:32:14.741 [2024-11-25 10:37:21.623608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.741 [2024-11-25 10:37:21.742329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.331 [2024-11-25 10:37:22.116209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:15.331 [2024-11-25 10:37:22.116276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:15.331 [2024-11-25 10:37:22.277800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.277855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:15.331 [2024-11-25 10:37:22.277870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:15.331 [2024-11-25 10:37:22.277881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.277928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.277943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:15.331 [2024-11-25 10:37:22.277954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:15.331 [2024-11-25 10:37:22.277964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.277986] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:15.331 [2024-11-25 10:37:22.278989] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:15.331 [2024-11-25 10:37:22.279019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.279030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:15.331 [2024-11-25 10:37:22.279042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:32:15.331 [2024-11-25 10:37:22.279052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.280458] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:15.331 [2024-11-25 10:37:22.300122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.300164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:15.331 [2024-11-25 10:37:22.300178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.697 ms 00:32:15.331 [2024-11-25 10:37:22.300189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.300252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.300264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:15.331 [2024-11-25 10:37:22.300276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:15.331 [2024-11-25 10:37:22.300295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.306993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:15.331 [2024-11-25 10:37:22.307023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:15.331 [2024-11-25 10:37:22.307035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.636 ms 00:32:15.331 [2024-11-25 10:37:22.307050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.307125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.307139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:15.331 [2024-11-25 10:37:22.307150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:15.331 [2024-11-25 10:37:22.307160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.307198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.307210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:15.331 [2024-11-25 10:37:22.307221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:15.331 [2024-11-25 10:37:22.307230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.307257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:15.331 [2024-11-25 10:37:22.312074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.312107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:15.331 [2024-11-25 10:37:22.312123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:32:15.331 [2024-11-25 10:37:22.312133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.312164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.312175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:15.331 [2024-11-25 10:37:22.312186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:15.331 [2024-11-25 10:37:22.312196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.312247] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:15.331 [2024-11-25 10:37:22.312271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:15.331 [2024-11-25 10:37:22.312306] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:15.331 [2024-11-25 10:37:22.312326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:15.331 [2024-11-25 10:37:22.312414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:15.331 [2024-11-25 10:37:22.312427] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:15.331 [2024-11-25 10:37:22.312441] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:15.331 [2024-11-25 10:37:22.312455] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:15.331 [2024-11-25 10:37:22.312467] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:15.331 [2024-11-25 10:37:22.312478] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:15.331 [2024-11-25 10:37:22.312503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:15.331 [2024-11-25 10:37:22.312514] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:15.331 [2024-11-25 10:37:22.312527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:15.331 [2024-11-25 10:37:22.312539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.312550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:15.331 [2024-11-25 10:37:22.312560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:15.331 [2024-11-25 10:37:22.312570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.312641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.331 [2024-11-25 10:37:22.312652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:15.331 [2024-11-25 10:37:22.312662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:15.331 [2024-11-25 10:37:22.312673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.331 [2024-11-25 10:37:22.312768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:15.331 [2024-11-25 10:37:22.312784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:15.331 [2024-11-25 10:37:22.312794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.331 [2024-11-25 10:37:22.312805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.331 [2024-11-25 10:37:22.312815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:15.331 [2024-11-25 10:37:22.312824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:15.331 [2024-11-25 10:37:22.312834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:15.331 [2024-11-25 10:37:22.312844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:15.331 [2024-11-25 10:37:22.312854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:15.331 [2024-11-25 10:37:22.312865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.331 [2024-11-25 10:37:22.312876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:15.331 [2024-11-25 10:37:22.312886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:15.331 [2024-11-25 10:37:22.312895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.331 [2024-11-25 10:37:22.312914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:15.331 [2024-11-25 10:37:22.312923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:15.331 [2024-11-25 10:37:22.312933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.331 [2024-11-25 10:37:22.312942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:15.331 [2024-11-25 10:37:22.312951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:15.332 [2024-11-25 10:37:22.312961] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.332 [2024-11-25 10:37:22.312971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:15.332 [2024-11-25 10:37:22.312980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:15.332 [2024-11-25 10:37:22.312989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.332 [2024-11-25 10:37:22.312998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:15.332 [2024-11-25 10:37:22.313007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.332 [2024-11-25 10:37:22.313025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:15.332 [2024-11-25 10:37:22.313034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.332 [2024-11-25 10:37:22.313053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:15.332 [2024-11-25 10:37:22.313062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.332 [2024-11-25 10:37:22.313080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:15.332 [2024-11-25 10:37:22.313090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.332 [2024-11-25 10:37:22.313108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:15.332 [2024-11-25 10:37:22.313117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:15.332 [2024-11-25 10:37:22.313126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.332 [2024-11-25 10:37:22.313135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:15.332 [2024-11-25 10:37:22.313143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:15.332 [2024-11-25 10:37:22.313152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:15.332 [2024-11-25 10:37:22.313171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:15.332 [2024-11-25 10:37:22.313183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313191] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:15.332 [2024-11-25 10:37:22.313201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:15.332 [2024-11-25 10:37:22.313210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.332 [2024-11-25 10:37:22.313220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.332 [2024-11-25 10:37:22.313230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:15.332 [2024-11-25 10:37:22.313239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:15.332 [2024-11-25 10:37:22.313248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:15.332 
[2024-11-25 10:37:22.313257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:15.332 [2024-11-25 10:37:22.313266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:15.332 [2024-11-25 10:37:22.313275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:15.332 [2024-11-25 10:37:22.313286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:15.332 [2024-11-25 10:37:22.313299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:15.332 [2024-11-25 10:37:22.313324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:15.332 [2024-11-25 10:37:22.313334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:15.332 [2024-11-25 10:37:22.313344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:15.332 [2024-11-25 10:37:22.313354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:15.332 [2024-11-25 10:37:22.313364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:15.332 [2024-11-25 10:37:22.313375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:15.332 [2024-11-25 10:37:22.313384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:15.332 [2024-11-25 10:37:22.313402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:15.332 [2024-11-25 10:37:22.313412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:15.332 [2024-11-25 10:37:22.313463] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:15.332 [2024-11-25 10:37:22.313474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:15.332 [2024-11-25 10:37:22.313507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:15.332 [2024-11-25 10:37:22.313518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:15.332 [2024-11-25 10:37:22.313530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:15.332 [2024-11-25 10:37:22.313541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.313552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:15.332 [2024-11-25 10:37:22.313562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:32:15.332 [2024-11-25 10:37:22.313572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.355281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.355317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:15.332 [2024-11-25 10:37:22.355330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.732 ms 00:32:15.332 [2024-11-25 10:37:22.355345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.355421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.355433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:15.332 [2024-11-25 10:37:22.355445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:15.332 [2024-11-25 10:37:22.355456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.415158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.415198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:15.332 [2024-11-25 10:37:22.415211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.733 ms 00:32:15.332 [2024-11-25 10:37:22.415222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.415256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.415268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:15.332 [2024-11-25 10:37:22.415283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:15.332 [2024-11-25 10:37:22.415293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.415777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.415801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:15.332 [2024-11-25 10:37:22.415813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:32:15.332 [2024-11-25 10:37:22.415824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.332 [2024-11-25 10:37:22.415942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.332 [2024-11-25 10:37:22.415956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:15.332 [2024-11-25 10:37:22.415970] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:32:15.332 [2024-11-25 10:37:22.415980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.432923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.432960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:15.592 [2024-11-25 10:37:22.432977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.950 ms 00:32:15.592 [2024-11-25 10:37:22.432988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.451388] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:15.592 [2024-11-25 10:37:22.451434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:15.592 [2024-11-25 10:37:22.451449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.451460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:15.592 [2024-11-25 10:37:22.451471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.393 ms 00:32:15.592 [2024-11-25 10:37:22.451481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.481534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.481574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:15.592 [2024-11-25 10:37:22.481588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.055 ms 00:32:15.592 [2024-11-25 10:37:22.481598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.500011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.500051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:15.592 [2024-11-25 10:37:22.500064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.383 ms 00:32:15.592 [2024-11-25 10:37:22.500074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.518218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.518258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:15.592 [2024-11-25 10:37:22.518271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:32:15.592 [2024-11-25 10:37:22.518280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.519003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.519033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:15.592 [2024-11-25 10:37:22.519049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:32:15.592 [2024-11-25 10:37:22.519059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.603195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.603258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:15.592 [2024-11-25 10:37:22.603279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.251 ms 00:32:15.592 [2024-11-25 10:37:22.603290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.613993] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:15.592 [2024-11-25 10:37:22.616241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.616270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:15.592 [2024-11-25 10:37:22.616284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:32:15.592 [2024-11-25 10:37:22.616293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.616369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.616382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:15.592 [2024-11-25 10:37:22.616393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:15.592 [2024-11-25 10:37:22.616407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.617254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.617277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:15.592 [2024-11-25 10:37:22.617287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:32:15.592 [2024-11-25 10:37:22.617297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.592 [2024-11-25 10:37:22.617319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.592 [2024-11-25 10:37:22.617330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:15.593 [2024-11-25 10:37:22.617340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:15.593 [2024-11-25 10:37:22.617350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.593 [2024-11-25 10:37:22.617385] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:15.593 [2024-11-25 10:37:22.617405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.593 [2024-11-25 10:37:22.617415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:15.593 [2024-11-25 10:37:22.617426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:15.593 [2024-11-25 10:37:22.617435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.593 [2024-11-25 10:37:22.654033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.593 [2024-11-25 10:37:22.654079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:15.593 [2024-11-25 10:37:22.654093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.638 ms 00:32:15.593 [2024-11-25 10:37:22.654110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.593 [2024-11-25 10:37:22.654183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.593 [2024-11-25 10:37:22.654196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:15.593 [2024-11-25 10:37:22.654207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:15.593 [2024-11-25 10:37:22.654217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
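
This second startup restores the NV cache, valid map, band, trim, P2L, and L2P state before going live, then re-marks the device dirty ('Set FTL dirty state'); the finish_msg on the next line reports its total. The dirty/clean pairing visible in this log looks like the standard crash-detection handshake: the flag is set while the device is in use and cleared only by the orderly 'Set FTL clean state' step of a shutdown, so a startup that still finds it set can tell the previous stop was unclean. A small sketch, again against a hypothetical build.log capture, listing those state transitions in the order they occurred:

    # Hypothetical: list the dirty/clean state steps recorded in this run.
    grep -o "name: Set FTL \(dirty\|clean\) state" build.log
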
00:32:15.593 [2024-11-25 10:37:22.655280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.667 ms, result 0 00:32:16.972  [2024-11-25T10:37:25.023Z] Copying: 26/1024 [MB] (26 MBps) … [2024-11-25T10:38:00.232Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-25 10:38:00.158352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.158455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:53.120 [2024-11-25 10:38:00.158484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:53.120 [2024-11-25 10:38:00.158538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.120 [2024-11-25 10:38:00.158581] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:53.120 [2024-11-25 10:38:00.166402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.166457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:53.120 [2024-11-25 10:38:00.166484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.800 ms 00:32:53.120 [2024-11-25 10:38:00.166510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.120 [2024-11-25
10:38:00.166803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.166822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:53.120 [2024-11-25 10:38:00.166839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:32:53.120 [2024-11-25 10:38:00.166854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.120 [2024-11-25 10:38:00.171308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.171345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:53.120 [2024-11-25 10:38:00.171362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:32:53.120 [2024-11-25 10:38:00.171384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.120 [2024-11-25 10:38:00.177516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.177556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:53.120 [2024-11-25 10:38:00.177568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms 00:32:53.120 [2024-11-25 10:38:00.177579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.120 [2024-11-25 10:38:00.214467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.120 [2024-11-25 10:38:00.214533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:53.120 [2024-11-25 10:38:00.214548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.876 ms 00:32:53.120 [2024-11-25 10:38:00.214558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.235474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.235521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:53.394 [2024-11-25 10:38:00.235536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.910 ms 00:32:53.394 [2024-11-25 10:38:00.235546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.237679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.237716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:53.394 [2024-11-25 10:38:00.237729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.087 ms 00:32:53.394 [2024-11-25 10:38:00.237740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.273925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.273965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:53.394 [2024-11-25 10:38:00.273978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.226 ms 00:32:53.394 [2024-11-25 10:38:00.273987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.310220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.310261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:53.394 [2024-11-25 10:38:00.310289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.254 ms 00:32:53.394 [2024-11-25 10:38:00.310299] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.346171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.346212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:53.394 [2024-11-25 10:38:00.346224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.892 ms 00:32:53.394 [2024-11-25 10:38:00.346234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.382510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.394 [2024-11-25 10:38:00.382552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:53.394 [2024-11-25 10:38:00.382565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.259 ms 00:32:53.394 [2024-11-25 10:38:00.382574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.394 [2024-11-25 10:38:00.382611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:53.394 [2024-11-25 10:38:00.382634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:53.394 [2024-11-25 10:38:00.382647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:53.394 [2024-11-25 10:38:00.382658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382814] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.382996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 
10:38:00.383067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:53.394 [2024-11-25 10:38:00.383120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:32:53.395 [2024-11-25 10:38:00.383331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:53.395 [2024-11-25 10:38:00.383699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:53.395 [2024-11-25 10:38:00.383712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: da427e63-7cad-4dac-b19a-5c4ed8c3c31c 00:32:53.395 [2024-11-25 10:38:00.383723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:53.395 [2024-11-25 10:38:00.383733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:53.395 [2024-11-25 10:38:00.383742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:53.395 [2024-11-25 10:38:00.383753] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:53.395 [2024-11-25 10:38:00.383773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:53.395 [2024-11-25 10:38:00.383783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:53.395 [2024-11-25 10:38:00.383793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:53.395 [2024-11-25 10:38:00.383802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:53.395 [2024-11-25 10:38:00.383813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:53.395 [2024-11-25 10:38:00.383823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.395 [2024-11-25 10:38:00.383832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:53.395 [2024-11-25 10:38:00.383843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:32:53.395 [2024-11-25 10:38:00.383853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.403621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.395 [2024-11-25 10:38:00.403658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:53.395 [2024-11-25 10:38:00.403671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.762 ms 00:32:53.395 [2024-11-25 10:38:00.403682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.404272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:53.395 [2024-11-25 10:38:00.404300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:32:53.395 [2024-11-25 10:38:00.404311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:32:53.395 [2024-11-25 10:38:00.404320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.456998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.395 [2024-11-25 10:38:00.457037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:53.395 [2024-11-25 10:38:00.457066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.395 [2024-11-25 10:38:00.457076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.457126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.395 [2024-11-25 10:38:00.457142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:53.395 [2024-11-25 10:38:00.457153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.395 [2024-11-25 10:38:00.457162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.457227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.395 [2024-11-25 10:38:00.457240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:53.395 [2024-11-25 10:38:00.457250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.395 [2024-11-25 10:38:00.457260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.395 [2024-11-25 10:38:00.457276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.395 [2024-11-25 10:38:00.457286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:53.395 [2024-11-25 10:38:00.457301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.395 [2024-11-25 10:38:00.457311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.580214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.580268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:53.655 [2024-11-25 10:38:00.580298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.580309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.680877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.680930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:53.655 [2024-11-25 10:38:00.680950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.680961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.681064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:53.655 [2024-11-25 10:38:00.681074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 
10:38:00.681141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:53.655 [2024-11-25 10:38:00.681151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.681289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:53.655 [2024-11-25 10:38:00.681300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.681356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:53.655 [2024-11-25 10:38:00.681366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.681440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:53.655 [2024-11-25 10:38:00.681450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:53.655 [2024-11-25 10:38:00.681533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:53.655 [2024-11-25 10:38:00.681543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:53.655 [2024-11-25 10:38:00.681553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:53.655 [2024-11-25 10:38:00.681673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.156 ms, result 0 00:32:54.591 00:32:54.591 00:32:54.850 10:38:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:56.756 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80966 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@954 -- # '[' -z 80966 ']' 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80966 00:32:56.756 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80966) - No such process 00:32:56.756 Process with pid 80966 is not found 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80966 is not found' 00:32:56.756 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:57.015 Remove shared memory files 00:32:57.015 10:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:57.015 10:38:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:57.015 10:38:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:57.015 00:32:57.015 real 3m33.850s 00:32:57.015 user 4m1.501s 00:32:57.015 sys 0m37.714s 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.015 10:38:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:57.015 ************************************ 00:32:57.015 END TEST ftl_dirty_shutdown 00:32:57.015 ************************************ 00:32:57.015 10:38:04 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:57.015 10:38:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:57.015 10:38:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.015 10:38:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:57.015 ************************************ 00:32:57.015 START TEST ftl_upgrade_shutdown 00:32:57.015 ************************************ 00:32:57.015 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:57.275 * Looking for test storage... 
00:32:57.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.275 --rc genhtml_branch_coverage=1 00:32:57.275 --rc genhtml_function_coverage=1 00:32:57.275 --rc genhtml_legend=1 00:32:57.275 --rc geninfo_all_blocks=1 00:32:57.275 --rc geninfo_unexecuted_blocks=1 00:32:57.275 00:32:57.275 ' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.275 --rc genhtml_branch_coverage=1 00:32:57.275 --rc genhtml_function_coverage=1 00:32:57.275 --rc genhtml_legend=1 00:32:57.275 --rc geninfo_all_blocks=1 00:32:57.275 --rc geninfo_unexecuted_blocks=1 00:32:57.275 00:32:57.275 ' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.275 --rc genhtml_branch_coverage=1 00:32:57.275 --rc genhtml_function_coverage=1 00:32:57.275 --rc genhtml_legend=1 00:32:57.275 --rc geninfo_all_blocks=1 00:32:57.275 --rc geninfo_unexecuted_blocks=1 00:32:57.275 00:32:57.275 ' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.275 --rc genhtml_branch_coverage=1 00:32:57.275 --rc genhtml_function_coverage=1 00:32:57.275 --rc genhtml_legend=1 00:32:57.275 --rc geninfo_all_blocks=1 00:32:57.275 --rc geninfo_unexecuted_blocks=1 00:32:57.275 00:32:57.275 ' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:57.275 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:57.276 10:38:04 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83244 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83244 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83244 ']' 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.276 10:38:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:57.535 [2024-11-25 10:38:04.470346] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:32:57.535 [2024-11-25 10:38:04.470476] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83244 ] 00:32:57.794 [2024-11-25 10:38:04.649418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.794 [2024-11-25 10:38:04.759099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:58.731 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:58.990 10:38:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:59.249 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:59.249 { 00:32:59.249 "name": "basen1", 00:32:59.249 "aliases": [ 00:32:59.249 "83c31bd6-d560-4b31-a8fe-ed16dcfa839e" 00:32:59.249 ], 00:32:59.249 "product_name": "NVMe disk", 00:32:59.249 "block_size": 4096, 00:32:59.249 "num_blocks": 1310720, 00:32:59.249 "uuid": "83c31bd6-d560-4b31-a8fe-ed16dcfa839e", 00:32:59.250 "numa_id": -1, 00:32:59.250 "assigned_rate_limits": { 00:32:59.250 "rw_ios_per_sec": 0, 00:32:59.250 "rw_mbytes_per_sec": 0, 00:32:59.250 "r_mbytes_per_sec": 0, 00:32:59.250 "w_mbytes_per_sec": 0 00:32:59.250 }, 00:32:59.250 "claimed": true, 00:32:59.250 "claim_type": "read_many_write_one", 00:32:59.250 "zoned": false, 00:32:59.250 "supported_io_types": { 00:32:59.250 "read": true, 00:32:59.250 "write": true, 00:32:59.250 "unmap": true, 00:32:59.250 "flush": true, 00:32:59.250 "reset": true, 00:32:59.250 "nvme_admin": true, 00:32:59.250 "nvme_io": true, 00:32:59.250 "nvme_io_md": false, 00:32:59.250 "write_zeroes": true, 00:32:59.250 "zcopy": false, 00:32:59.250 "get_zone_info": false, 00:32:59.250 "zone_management": false, 00:32:59.250 "zone_append": false, 00:32:59.250 "compare": true, 00:32:59.250 "compare_and_write": false, 00:32:59.250 "abort": true, 00:32:59.250 "seek_hole": false, 00:32:59.250 "seek_data": false, 00:32:59.250 "copy": true, 00:32:59.250 "nvme_iov_md": false 00:32:59.250 }, 00:32:59.250 "driver_specific": { 00:32:59.250 "nvme": [ 00:32:59.250 { 00:32:59.250 "pci_address": "0000:00:11.0", 00:32:59.250 "trid": { 00:32:59.250 "trtype": "PCIe", 00:32:59.250 "traddr": "0000:00:11.0" 00:32:59.250 }, 00:32:59.250 "ctrlr_data": { 00:32:59.250 "cntlid": 0, 00:32:59.250 "vendor_id": "0x1b36", 00:32:59.250 "model_number": "QEMU NVMe Ctrl", 00:32:59.250 "serial_number": "12341", 00:32:59.250 "firmware_revision": "8.0.0", 00:32:59.250 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:59.250 "oacs": { 00:32:59.250 "security": 0, 00:32:59.250 "format": 1, 00:32:59.250 "firmware": 0, 00:32:59.250 "ns_manage": 1 00:32:59.250 }, 00:32:59.250 "multi_ctrlr": false, 00:32:59.250 "ana_reporting": false 00:32:59.250 }, 00:32:59.250 "vs": { 00:32:59.250 "nvme_version": "1.4" 00:32:59.250 }, 00:32:59.250 "ns_data": { 00:32:59.250 "id": 1, 00:32:59.250 "can_share": false 00:32:59.250 } 00:32:59.250 } 00:32:59.250 ], 00:32:59.250 "mp_policy": "active_passive" 00:32:59.250 } 00:32:59.250 } 00:32:59.250 ]' 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:59.250 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.509 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=690c8a9e-712b-4ff1-89dc-77d483573e88 00:32:59.509 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:59.509 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 690c8a9e-712b-4ff1-89dc-77d483573e88 00:32:59.768 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:59.768 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=dc8293f1-9e1e-4a89-a88d-54257ae64aa9 00:32:59.768 10:38:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u dc8293f1-9e1e-4a89-a88d-54257ae64aa9 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=769373ed-2a61-4736-ac5e-34b42ff85530 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 769373ed-2a61-4736-ac5e-34b42ff85530 ]] 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 769373ed-2a61-4736-ac5e-34b42ff85530 5120 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=769373ed-2a61-4736-ac5e-34b42ff85530 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 769373ed-2a61-4736-ac5e-34b42ff85530 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=769373ed-2a61-4736-ac5e-34b42ff85530 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:00.027 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 769373ed-2a61-4736-ac5e-34b42ff85530 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:00.287 { 00:33:00.287 "name": "769373ed-2a61-4736-ac5e-34b42ff85530", 00:33:00.287 "aliases": [ 00:33:00.287 "lvs/basen1p0" 00:33:00.287 ], 00:33:00.287 "product_name": "Logical Volume", 00:33:00.287 "block_size": 4096, 00:33:00.287 "num_blocks": 5242880, 00:33:00.287 "uuid": "769373ed-2a61-4736-ac5e-34b42ff85530", 00:33:00.287 "assigned_rate_limits": { 00:33:00.287 "rw_ios_per_sec": 0, 00:33:00.287 "rw_mbytes_per_sec": 0, 00:33:00.287 "r_mbytes_per_sec": 0, 00:33:00.287 "w_mbytes_per_sec": 0 00:33:00.287 }, 00:33:00.287 "claimed": false, 00:33:00.287 "zoned": false, 00:33:00.287 "supported_io_types": { 00:33:00.287 "read": true, 00:33:00.287 "write": true, 00:33:00.287 "unmap": true, 00:33:00.287 "flush": false, 00:33:00.287 "reset": true, 00:33:00.287 "nvme_admin": false, 00:33:00.287 "nvme_io": false, 00:33:00.287 "nvme_io_md": false, 00:33:00.287 "write_zeroes": 
true, 00:33:00.287 "zcopy": false, 00:33:00.287 "get_zone_info": false, 00:33:00.287 "zone_management": false, 00:33:00.287 "zone_append": false, 00:33:00.287 "compare": false, 00:33:00.287 "compare_and_write": false, 00:33:00.287 "abort": false, 00:33:00.287 "seek_hole": true, 00:33:00.287 "seek_data": true, 00:33:00.287 "copy": false, 00:33:00.287 "nvme_iov_md": false 00:33:00.287 }, 00:33:00.287 "driver_specific": { 00:33:00.287 "lvol": { 00:33:00.287 "lvol_store_uuid": "dc8293f1-9e1e-4a89-a88d-54257ae64aa9", 00:33:00.287 "base_bdev": "basen1", 00:33:00.287 "thin_provision": true, 00:33:00.287 "num_allocated_clusters": 0, 00:33:00.287 "snapshot": false, 00:33:00.287 "clone": false, 00:33:00.287 "esnap_clone": false 00:33:00.287 } 00:33:00.287 } 00:33:00.287 } 00:33:00.287 ]' 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:00.287 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:33:00.546 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:33:00.546 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:33:00.546 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:33:00.805 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:33:00.805 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:33:00.805 10:38:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 769373ed-2a61-4736-ac5e-34b42ff85530 -c cachen1p0 --l2p_dram_limit 2 00:33:01.065 [2024-11-25 10:38:08.006137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.006198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:01.065 [2024-11-25 10:38:08.006217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:01.065 [2024-11-25 10:38:08.006229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.006294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.006307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:01.065 [2024-11-25 10:38:08.006320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:33:01.065 [2024-11-25 10:38:08.006331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.006355] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:01.065 [2024-11-25 
10:38:08.007356] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:01.065 [2024-11-25 10:38:08.007392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.007403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:01.065 [2024-11-25 10:38:08.007416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:33:01.065 [2024-11-25 10:38:08.007427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.007530] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2e9c3c50-b3cb-4afd-b40e-e431bf764b06 00:33:01.065 [2024-11-25 10:38:08.008974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.009011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:33:01.065 [2024-11-25 10:38:08.009024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:33:01.065 [2024-11-25 10:38:08.009036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.016612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.016653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:01.065 [2024-11-25 10:38:08.016666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.538 ms 00:33:01.065 [2024-11-25 10:38:08.016678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.016724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.016739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:01.065 [2024-11-25 10:38:08.016751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:33:01.065 [2024-11-25 10:38:08.016766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.016837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.016853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:01.065 [2024-11-25 10:38:08.016867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:01.065 [2024-11-25 10:38:08.016882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.016907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:01.065 [2024-11-25 10:38:08.021578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.021614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:01.065 [2024-11-25 10:38:08.021631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.682 ms 00:33:01.065 [2024-11-25 10:38:08.021641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.021673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.065 [2024-11-25 10:38:08.021684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:01.065 [2024-11-25 10:38:08.021696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:01.065 [2024-11-25 10:38:08.021706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:01.065 [2024-11-25 10:38:08.021752] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:33:01.065 [2024-11-25 10:38:08.021877] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:01.065 [2024-11-25 10:38:08.021897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:01.065 [2024-11-25 10:38:08.021911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:01.066 [2024-11-25 10:38:08.021926] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:01.066 [2024-11-25 10:38:08.021939] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:01.066 [2024-11-25 10:38:08.021953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:01.066 [2024-11-25 10:38:08.021963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:01.066 [2024-11-25 10:38:08.021978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:01.066 [2024-11-25 10:38:08.021988] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:01.066 [2024-11-25 10:38:08.022002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.066 [2024-11-25 10:38:08.022012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:01.066 [2024-11-25 10:38:08.022026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:33:01.066 [2024-11-25 10:38:08.022036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.066 [2024-11-25 10:38:08.022113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.066 [2024-11-25 10:38:08.022136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:01.066 [2024-11-25 10:38:08.022150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:01.066 [2024-11-25 10:38:08.022160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.066 [2024-11-25 10:38:08.022257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:01.066 [2024-11-25 10:38:08.022270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:01.066 [2024-11-25 10:38:08.022283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:01.066 [2024-11-25 10:38:08.022317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:01.066 [2024-11-25 10:38:08.022338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:01.066 [2024-11-25 10:38:08.022350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:01.066 [2024-11-25 10:38:08.022360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:01.066 [2024-11-25 10:38:08.022381] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:33:01.066 [2024-11-25 10:38:08.022393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:01.066 [2024-11-25 10:38:08.022414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:01.066 [2024-11-25 10:38:08.022423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:01.066 [2024-11-25 10:38:08.022449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:01.066 [2024-11-25 10:38:08.022462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:01.066 [2024-11-25 10:38:08.022484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:01.066 [2024-11-25 10:38:08.022530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:01.066 [2024-11-25 10:38:08.022564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:01.066 [2024-11-25 10:38:08.022594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:01.066 [2024-11-25 10:38:08.022630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:01.066 [2024-11-25 10:38:08.022661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:01.066 [2024-11-25 10:38:08.022693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:01.066 [2024-11-25 10:38:08.022723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:01.066 [2024-11-25 10:38:08.022735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022744] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:33:01.066 [2024-11-25 10:38:08.022756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:01.066 [2024-11-25 10:38:08.022767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:01.066 [2024-11-25 10:38:08.022791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:01.066 [2024-11-25 10:38:08.022805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:01.066 [2024-11-25 10:38:08.022815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:01.066 [2024-11-25 10:38:08.022827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:01.066 [2024-11-25 10:38:08.022837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:01.066 [2024-11-25 10:38:08.022849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:01.066 [2024-11-25 10:38:08.022863] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:01.066 [2024-11-25 10:38:08.022881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.022893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:01.066 [2024-11-25 10:38:08.022907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.022918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.022931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:01.066 [2024-11-25 10:38:08.022941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:01.066 [2024-11-25 10:38:08.022954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:01.066 [2024-11-25 10:38:08.022965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:01.066 [2024-11-25 10:38:08.022978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.022989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:01.066 [2024-11-25 10:38:08.023063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:01.066 [2024-11-25 10:38:08.023076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:01.066 [2024-11-25 10:38:08.023100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:01.066 [2024-11-25 10:38:08.023111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:01.066 [2024-11-25 10:38:08.023123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:01.066 [2024-11-25 10:38:08.023134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:01.066 [2024-11-25 10:38:08.023147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:01.066 [2024-11-25 10:38:08.023158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 00:33:01.066 [2024-11-25 10:38:08.023170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:01.066 [2024-11-25 10:38:08.023211] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
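[Annotation] The startup trace above is the product of the three RPCs issued earlier in this run: attach the second NVMe controller as the cache device, split off a 5120 MiB partition of it for the NV cache, and create the FTL bdev on top of the thin-provisioned lvol (referenced by UUID) plus that partition. Condensed, with the full /home/vagrant/spdk_repo paths shortened to repo-relative ones:

    scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    scripts/rpc.py bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 769373ed-2a61-4736-ac5e-34b42ff85530 \
        -c cachen1p0 --l2p_dram_limit 2

The capacities in the layout dump follow directly from the jq probes above: block_size 4096 x num_blocks 5242880 = 20 GiB, i.e. the "Base device capacity: 20480.00 MiB" line, with the 5120 MiB split showing up as the NV cache device capacity.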
00:33:01.066 [2024-11-25 10:38:08.023232] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:04.353 [2024-11-25 10:38:11.310724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.310792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:04.353 [2024-11-25 10:38:11.310809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3292.846 ms 00:33:04.353 [2024-11-25 10:38:11.310823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.348579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.348639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:04.353 [2024-11-25 10:38:11.348656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.437 ms 00:33:04.353 [2024-11-25 10:38:11.348669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.348763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.348780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:04.353 [2024-11-25 10:38:11.348792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:04.353 [2024-11-25 10:38:11.348811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.394905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.394974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:04.353 [2024-11-25 10:38:11.394988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.099 ms 00:33:04.353 [2024-11-25 10:38:11.395001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.395046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.395060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:04.353 [2024-11-25 10:38:11.395072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:04.353 [2024-11-25 10:38:11.395084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.395568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.395592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:04.353 [2024-11-25 10:38:11.395614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.427 ms 00:33:04.353 [2024-11-25 10:38:11.395628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.395666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.395680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:04.353 [2024-11-25 10:38:11.395693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:04.353 [2024-11-25 10:38:11.395709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.416451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.416516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:04.353 [2024-11-25 10:38:11.416531] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.754 ms 00:33:04.353 [2024-11-25 10:38:11.416544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.353 [2024-11-25 10:38:11.429135] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:04.353 [2024-11-25 10:38:11.430209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.353 [2024-11-25 10:38:11.430238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:04.353 [2024-11-25 10:38:11.430254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.593 ms 00:33:04.353 [2024-11-25 10:38:11.430265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.470161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.470215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:33:04.612 [2024-11-25 10:38:11.470233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.926 ms 00:33:04.612 [2024-11-25 10:38:11.470244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.470336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.470353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:04.612 [2024-11-25 10:38:11.470370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:04.612 [2024-11-25 10:38:11.470380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.506724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.506773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:33:04.612 [2024-11-25 10:38:11.506792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.347 ms 00:33:04.612 [2024-11-25 10:38:11.506803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.543307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.543350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:33:04.612 [2024-11-25 10:38:11.543367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.510 ms 00:33:04.612 [2024-11-25 10:38:11.543377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.544068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.544096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:04.612 [2024-11-25 10:38:11.544112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:33:04.612 [2024-11-25 10:38:11.544126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.644223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.644278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:33:04.612 [2024-11-25 10:38:11.644301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.196 ms 00:33:04.612 [2024-11-25 10:38:11.644312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.681309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
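[Annotation] Each management step in these traces is logged as an Action / name / duration / status quadruple, which makes the startup cost easy to attribute: the 3292.846 ms NV cache scrub dwarfs every other step, and the per-step durations account for nearly all of the 3754.306 ms 'FTL startup' total reported a little further down. A throwaway one-liner along these lines (the ftl.log filename is hypothetical) tallies them from a captured log:

    grep -o 'duration: [0-9.]* ms' ftl.log | awk '{sum += $2} END {printf "%.3f ms\n", sum}'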
00:33:04.612 [2024-11-25 10:38:11.681364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:33:04.612 [2024-11-25 10:38:11.681382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.964 ms 00:33:04.612 [2024-11-25 10:38:11.681393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.612 [2024-11-25 10:38:11.717194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.612 [2024-11-25 10:38:11.717243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:33:04.612 [2024-11-25 10:38:11.717259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.803 ms 00:33:04.612 [2024-11-25 10:38:11.717270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.871 [2024-11-25 10:38:11.753400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.871 [2024-11-25 10:38:11.753453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:04.871 [2024-11-25 10:38:11.753470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.947 ms 00:33:04.871 [2024-11-25 10:38:11.753481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.871 [2024-11-25 10:38:11.753539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.871 [2024-11-25 10:38:11.753552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:04.871 [2024-11-25 10:38:11.753569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:04.871 [2024-11-25 10:38:11.753579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.871 [2024-11-25 10:38:11.753696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.871 [2024-11-25 10:38:11.753711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:04.871 [2024-11-25 10:38:11.753725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:04.871 [2024-11-25 10:38:11.753735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.871 [2024-11-25 10:38:11.754823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3754.306 ms, result 0 00:33:04.871 { 00:33:04.871 "name": "ftl", 00:33:04.871 "uuid": "2e9c3c50-b3cb-4afd-b40e-e431bf764b06" 00:33:04.871 } 00:33:04.871 10:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:33:04.871 [2024-11-25 10:38:11.973783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.130 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:33:05.130 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:33:05.388 [2024-11-25 10:38:12.425807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:05.388 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:33:05.647 [2024-11-25 10:38:12.631340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:05.647 10:38:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:05.906 Fill FTL, iteration 1 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83367 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83367 /var/tmp/spdk.tgt.sock 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83367 ']' 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.906 10:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:06.165 [2024-11-25 10:38:13.083843] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
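[Annotation] At this point the target side has just been wired up for NVMe/TCP: a TCP transport, one subsystem (any host allowed, at most one namespace), the FTL bdev added as that namespace, and a listener on 127.0.0.1:4420. Condensed from the RPCs above:

    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The loop parameters set just above (bs=1048576, count=1024, qd=2, iterations=2) mean each fill iteration pushes 1 GiB of random data in 1 MiB blocks at queue depth 2, with seek/skip advancing by 1024 MiB between iterations.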
00:33:06.165 [2024-11-25 10:38:13.083970] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83367 ] 00:33:06.165 [2024-11-25 10:38:13.262095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.424 [2024-11-25 10:38:13.379948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.360 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.360 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:07.360 10:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:33:07.619 ftln1 00:33:07.619 10:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:33:07.619 10:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83367 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83367 ']' 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83367 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83367 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:07.879 killing process with pid 83367 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83367' 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83367 00:33:07.879 10:38:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83367 00:33:10.412 10:38:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:33:10.412 10:38:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:10.412 [2024-11-25 10:38:17.180320] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
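[Annotation] The initiator here is a second SPDK process (spdk_tgt pinned to core 1, answering on /var/tmp/spdk.tgt.sock) that attaches to the target over loopback TCP and sees the FTL namespace as ftln1. The attach plus the iteration-1 fill, condensed from the lines above:

    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0       # -> ftln1
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0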
00:33:10.412 [2024-11-25 10:38:17.180440] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83425 ] 00:33:10.412 [2024-11-25 10:38:17.357082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.412 [2024-11-25 10:38:17.472781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.321  [2024-11-25T10:38:19.999Z] Copying: 239/1024 [MB] (239 MBps) [2024-11-25T10:38:20.935Z] Copying: 478/1024 [MB] (239 MBps) [2024-11-25T10:38:22.317Z] Copying: 722/1024 [MB] (244 MBps) [2024-11-25T10:38:22.317Z] Copying: 966/1024 [MB] (244 MBps) [2024-11-25T10:38:23.696Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:33:16.584 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:16.584 Calculate MD5 checksum, iteration 1 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:16.584 10:38:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:16.584 [2024-11-25 10:38:23.485323] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
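[Annotation] Iteration 1 lands at an average of 240 MBps, i.e. roughly 1024/240 ≈ 4.3 s for the gigabyte, consistent with the bracketed progress timestamps. The data is then read back over the same path into a flat file for checksumming; the readback differs from the fill only in direction (--ib/--of in place of --if/--ob) and in using --skip rather than --seek:

    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0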
00:33:16.584 [2024-11-25 10:38:23.485460] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83490 ] 00:33:16.584 [2024-11-25 10:38:23.664469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.843 [2024-11-25 10:38:23.810338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.221  [2024-11-25T10:38:26.272Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-25T10:38:27.211Z] Copying: 1024/1024 [MB] (average 619 MBps) 00:33:20.099 00:33:20.099 10:38:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:20.099 10:38:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e9ec13e5454e7859039a5a797b33716a 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:22.004 Fill FTL, iteration 2 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:22.004 10:38:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:22.004 [2024-11-25 10:38:28.766155] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
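[Annotation] With the first checksum banked (sums[0]=e9ec13e5454e7859039a5a797b33716a), seek advances to 1024 and the fill/read/hash cycle repeats. Reconstructed from the xtrace, the driving loop is essentially the following; this is a paraphrase rather than the upgrade_shutdown.sh source verbatim, with tcp_dd being the test helper that wraps spdk_dd and $file standing for the test/ftl/file scratch path:

    bs=1048576; count=1024; qd=2; iterations=2
    seek=0; skip=0; sums=()
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    done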
00:33:22.004 [2024-11-25 10:38:28.766272] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83547 ] 00:33:22.004 [2024-11-25 10:38:28.946615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.004 [2024-11-25 10:38:29.104511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.933  [2024-11-25T10:38:31.982Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-25T10:38:32.918Z] Copying: 487/1024 [MB] (243 MBps) [2024-11-25T10:38:33.853Z] Copying: 733/1024 [MB] (246 MBps) [2024-11-25T10:38:33.853Z] Copying: 977/1024 [MB] (244 MBps) [2024-11-25T10:38:35.227Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:33:28.115 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:28.115 Calculate MD5 checksum, iteration 2 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:28.115 10:38:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:28.115 [2024-11-25 10:38:35.193538] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
00:33:28.115 [2024-11-25 10:38:35.193662] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83617 ] 00:33:28.374 [2024-11-25 10:38:35.376099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.633 [2024-11-25 10:38:35.522794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.538  [2024-11-25T10:38:37.909Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-25T10:38:39.290Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:33:32.178 00:33:32.178 10:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:32.178 10:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:34.082 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:34.082 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9eea0a2923462a6ba6dc3e56072254f9 00:33:34.082 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:34.082 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:34.082 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:34.341 [2024-11-25 10:38:41.218986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.341 [2024-11-25 10:38:41.219054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:34.341 [2024-11-25 10:38:41.219087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:34.341 [2024-11-25 10:38:41.219097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.341 [2024-11-25 10:38:41.219124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.341 [2024-11-25 10:38:41.219135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:34.341 [2024-11-25 10:38:41.219150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:34.341 [2024-11-25 10:38:41.219160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.341 [2024-11-25 10:38:41.219181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.341 [2024-11-25 10:38:41.219192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:34.341 [2024-11-25 10:38:41.219203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:34.341 [2024-11-25 10:38:41.219213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.341 [2024-11-25 10:38:41.219273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.279 ms, result 0 00:33:34.341 true 00:33:34.341 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:34.341 { 00:33:34.341 "name": "ftl", 00:33:34.341 "properties": [ 00:33:34.341 { 00:33:34.341 "name": "superblock_version", 00:33:34.341 "value": 5, 00:33:34.341 "read-only": true 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "name": "base_device", 00:33:34.341 "bands": [ 00:33:34.341 { 00:33:34.341 "id": 0, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 
00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 1, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 2, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 3, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 4, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 5, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 6, 00:33:34.341 "state": "FREE", 00:33:34.341 "validity": 0.0 00:33:34.341 }, 00:33:34.341 { 00:33:34.341 "id": 7, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 8, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 9, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 10, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 11, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 12, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 13, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 14, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 15, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 16, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 17, 00:33:34.342 "state": "FREE", 00:33:34.342 "validity": 0.0 00:33:34.342 } 00:33:34.342 ], 00:33:34.342 "read-only": true 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "name": "cache_device", 00:33:34.342 "type": "bdev", 00:33:34.342 "chunks": [ 00:33:34.342 { 00:33:34.342 "id": 0, 00:33:34.342 "state": "INACTIVE", 00:33:34.342 "utilization": 0.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 1, 00:33:34.342 "state": "CLOSED", 00:33:34.342 "utilization": 1.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 2, 00:33:34.342 "state": "CLOSED", 00:33:34.342 "utilization": 1.0 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 3, 00:33:34.342 "state": "OPEN", 00:33:34.342 "utilization": 0.001953125 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "id": 4, 00:33:34.342 "state": "OPEN", 00:33:34.342 "utilization": 0.0 00:33:34.342 } 00:33:34.342 ], 00:33:34.342 "read-only": true 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "name": "verbose_mode", 00:33:34.342 "value": true, 00:33:34.342 "unit": "", 00:33:34.342 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:34.342 }, 00:33:34.342 { 00:33:34.342 "name": "prep_upgrade_on_shutdown", 00:33:34.342 "value": false, 00:33:34.342 "unit": "", 00:33:34.342 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:34.342 } 00:33:34.342 ] 00:33:34.342 } 00:33:34.342 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:34.601 [2024-11-25 10:38:41.638703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
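[Annotation] The chunk states in the properties dump line up with the two 1 GiB fills that just ran through the write buffer cache: chunk 0 is INACTIVE, chunks 1 and 2 are CLOSED at utilization 1.0, and chunk 3 is OPEN with a sliver of data (0.001953125, exactly 1/512 of a chunk). The test then counts the chunks that actually hold data using the jq filter visible just below, which yields used=3:

    scripts/rpc.py bdev_ftl_get_properties -b ftl | \
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'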
00:33:34.601 [2024-11-25 10:38:41.638754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:34.601 [2024-11-25 10:38:41.638769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:34.601 [2024-11-25 10:38:41.638780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.601 [2024-11-25 10:38:41.638806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.601 [2024-11-25 10:38:41.638818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:34.601 [2024-11-25 10:38:41.638828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:34.601 [2024-11-25 10:38:41.638838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.601 [2024-11-25 10:38:41.638858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:34.601 [2024-11-25 10:38:41.638869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:34.601 [2024-11-25 10:38:41.638879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:34.601 [2024-11-25 10:38:41.638889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:34.601 [2024-11-25 10:38:41.638946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.239 ms, result 0 00:33:34.601 true 00:33:34.601 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:34.601 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:34.601 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:34.860 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:34.860 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:34.860 10:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:35.120 [2024-11-25 10:38:42.066685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.120 [2024-11-25 10:38:42.066742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:35.120 [2024-11-25 10:38:42.066757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:35.120 [2024-11-25 10:38:42.066768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.120 [2024-11-25 10:38:42.066793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.120 [2024-11-25 10:38:42.066804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:35.120 [2024-11-25 10:38:42.066815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:35.120 [2024-11-25 10:38:42.066824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:35.120 [2024-11-25 10:38:42.066845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:35.120 [2024-11-25 10:38:42.066856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:35.120 [2024-11-25 10:38:42.066866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:35.120 [2024-11-25 10:38:42.066876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:35.120 [2024-11-25 10:38:42.066934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.240 ms, result 0 00:33:35.120 true 00:33:35.120 10:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:35.379 { 00:33:35.379 "name": "ftl", 00:33:35.379 "properties": [ 00:33:35.379 { 00:33:35.379 "name": "superblock_version", 00:33:35.379 "value": 5, 00:33:35.379 "read-only": true 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "name": "base_device", 00:33:35.379 "bands": [ 00:33:35.379 { 00:33:35.379 "id": 0, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 1, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 2, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 3, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 4, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 5, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 6, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 7, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 8, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 9, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 10, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 11, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 12, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 13, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 14, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 15, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 16, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 17, 00:33:35.379 "state": "FREE", 00:33:35.379 "validity": 0.0 00:33:35.379 } 00:33:35.379 ], 00:33:35.379 "read-only": true 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "name": "cache_device", 00:33:35.379 "type": "bdev", 00:33:35.379 "chunks": [ 00:33:35.379 { 00:33:35.379 "id": 0, 00:33:35.379 "state": "INACTIVE", 00:33:35.379 "utilization": 0.0 00:33:35.379 }, 00:33:35.379 { 00:33:35.379 "id": 1, 00:33:35.379 "state": "CLOSED", 00:33:35.380 "utilization": 1.0 00:33:35.380 }, 00:33:35.380 { 00:33:35.380 "id": 2, 00:33:35.380 "state": "CLOSED", 00:33:35.380 "utilization": 1.0 00:33:35.380 }, 00:33:35.380 { 00:33:35.380 "id": 3, 00:33:35.380 "state": "OPEN", 00:33:35.380 "utilization": 0.001953125 00:33:35.380 }, 00:33:35.380 { 00:33:35.380 "id": 4, 00:33:35.380 "state": "OPEN", 00:33:35.380 "utilization": 0.0 00:33:35.380 } 00:33:35.380 ], 00:33:35.380 "read-only": true 00:33:35.380 }, 00:33:35.380 { 00:33:35.380 "name": "verbose_mode", 
00:33:35.380 "value": true, 00:33:35.380 "unit": "", 00:33:35.380 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:35.380 }, 00:33:35.380 { 00:33:35.380 "name": "prep_upgrade_on_shutdown", 00:33:35.380 "value": true, 00:33:35.380 "unit": "", 00:33:35.380 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:35.380 } 00:33:35.380 ] 00:33:35.380 } 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83244 ]] 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83244 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83244 ']' 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83244 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83244 00:33:35.380 killing process with pid 83244 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83244' 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83244 00:33:35.380 10:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83244 00:33:36.760 [2024-11-25 10:38:43.451190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:36.760 [2024-11-25 10:38:43.471966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.760 [2024-11-25 10:38:43.472019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:36.760 [2024-11-25 10:38:43.472035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:36.760 [2024-11-25 10:38:43.472046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:36.760 [2024-11-25 10:38:43.472069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:36.760 [2024-11-25 10:38:43.476196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:36.760 [2024-11-25 10:38:43.476234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:36.760 [2024-11-25 10:38:43.476247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.116 ms 00:33:36.760 [2024-11-25 10:38:43.476258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.882 [2024-11-25 10:38:50.742897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.882 [2024-11-25 10:38:50.742968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:44.882 [2024-11-25 10:38:50.742991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7278.396 ms 00:33:44.882 [2024-11-25 10:38:50.743002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.882 [2024-11-25 10:38:50.744098] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:44.882 [2024-11-25 10:38:50.744131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:44.882 [2024-11-25 10:38:50.744144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.080 ms 00:33:44.883 [2024-11-25 10:38:50.744154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.745080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.745102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:44.883 [2024-11-25 10:38:50.745115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.897 ms 00:33:44.883 [2024-11-25 10:38:50.745132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.760360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.760413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:44.883 [2024-11-25 10:38:50.760430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.209 ms 00:33:44.883 [2024-11-25 10:38:50.760444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.769702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.769746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:44.883 [2024-11-25 10:38:50.769760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.228 ms 00:33:44.883 [2024-11-25 10:38:50.769771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.769855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.769869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:44.883 [2024-11-25 10:38:50.769887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:44.883 [2024-11-25 10:38:50.769897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.784623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.784664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:44.883 [2024-11-25 10:38:50.784677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.731 ms 00:33:44.883 [2024-11-25 10:38:50.784688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.799869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.799910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:44.883 [2024-11-25 10:38:50.799922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.168 ms 00:33:44.883 [2024-11-25 10:38:50.799932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.814665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.814706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:44.883 [2024-11-25 10:38:50.814719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.719 ms 00:33:44.883 [2024-11-25 10:38:50.814728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.829044] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.829083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:44.883 [2024-11-25 10:38:50.829096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.259 ms 00:33:44.883 [2024-11-25 10:38:50.829106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.829141] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:44.883 [2024-11-25 10:38:50.829170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:44.883 [2024-11-25 10:38:50.829183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:44.883 [2024-11-25 10:38:50.829194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:44.883 [2024-11-25 10:38:50.829205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:44.883 [2024-11-25 10:38:50.829365] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:44.883 [2024-11-25 10:38:50.829375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2e9c3c50-b3cb-4afd-b40e-e431bf764b06 00:33:44.883 [2024-11-25 10:38:50.829385] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:44.883 [2024-11-25 10:38:50.829395] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:44.883 [2024-11-25 10:38:50.829405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:44.883 [2024-11-25 10:38:50.829415] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:44.883 [2024-11-25 10:38:50.829439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:44.883 [2024-11-25 10:38:50.829450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:44.883 [2024-11-25 10:38:50.829463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:44.883 [2024-11-25 10:38:50.829472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:44.883 [2024-11-25 10:38:50.829482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:44.883 [2024-11-25 10:38:50.829505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.829516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:44.883 [2024-11-25 10:38:50.829526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:33:44.883 [2024-11-25 10:38:50.829536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.849868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.849907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:44.883 [2024-11-25 10:38:50.849926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.332 ms 00:33:44.883 [2024-11-25 10:38:50.849937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.850405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.883 [2024-11-25 10:38:50.850422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:44.883 [2024-11-25 10:38:50.850433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.446 ms 00:33:44.883 [2024-11-25 10:38:50.850443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.916083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.883 [2024-11-25 10:38:50.916130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:44.883 [2024-11-25 10:38:50.916149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.883 [2024-11-25 10:38:50.916160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.883 [2024-11-25 10:38:50.916194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.883 [2024-11-25 10:38:50.916205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:44.883 [2024-11-25 10:38:50.916216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.883 [2024-11-25 10:38:50.916227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:50.916320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:50.916335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:44.884 [2024-11-25 10:38:50.916346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:50.916361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:50.916380] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:50.916391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:44.884 [2024-11-25 10:38:50.916401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:50.916411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.040692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.040777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:44.884 [2024-11-25 10:38:51.040800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.040811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.140397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.140461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:44.884 [2024-11-25 10:38:51.140492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.140503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.140632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.140647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:44.884 [2024-11-25 10:38:51.140659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.140670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.140729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.140741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:44.884 [2024-11-25 10:38:51.140752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.140762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.140871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.140885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:44.884 [2024-11-25 10:38:51.140895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.140906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.140945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.140958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:44.884 [2024-11-25 10:38:51.140968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.140978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.141017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.141028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:44.884 [2024-11-25 10:38:51.141039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.141049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 
[2024-11-25 10:38:51.141096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.884 [2024-11-25 10:38:51.141109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:44.884 [2024-11-25 10:38:51.141119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.884 [2024-11-25 10:38:51.141130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.884 [2024-11-25 10:38:51.141250] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7681.707 ms, result 0 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83814 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83814 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83814 ']' 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:48.172 10:38:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:48.431 [2024-11-25 10:38:55.308836] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
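The sequence above (ftl/common.sh@81-91) brings the target back up for the next phase: it checks that tgt.json exists, launches spdk_tgt pinned to core 0 with that config, records the new PID (83814 here), and blocks in waitforlisten until the RPC socket answers. A minimal bash sketch of that pattern, using the paths from the log; the backgrounding detail and the poll-loop body are assumptions, not the literal autotest_common.sh code:

    # Launch the target and wait until its RPC socket is usable.
    tcp_target_setup() {
        local base_bdev= cache_bdev=
        [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] || return 1
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten "$spdk_tgt_pid"
    }

    # Poll the RPC socket until the target answers, while confirming the
    # process is still alive; retry count and sleep interval are assumed.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while waiting
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }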
00:33:48.431 [2024-11-25 10:38:55.308967] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83814 ] 00:33:48.431 [2024-11-25 10:38:55.491475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.690 [2024-11-25 10:38:55.610691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.628 [2024-11-25 10:38:56.562392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:49.628 [2024-11-25 10:38:56.562472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:49.628 [2024-11-25 10:38:56.709649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.628 [2024-11-25 10:38:56.709708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:49.628 [2024-11-25 10:38:56.709723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:49.628 [2024-11-25 10:38:56.709734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.628 [2024-11-25 10:38:56.709796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.628 [2024-11-25 10:38:56.709808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:49.628 [2024-11-25 10:38:56.709819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:33:49.628 [2024-11-25 10:38:56.709830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.628 [2024-11-25 10:38:56.709860] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:49.628 [2024-11-25 10:38:56.710832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:49.628 [2024-11-25 10:38:56.710857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.628 [2024-11-25 10:38:56.710868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:49.628 [2024-11-25 10:38:56.710879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.010 ms 00:33:49.628 [2024-11-25 10:38:56.710889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.628 [2024-11-25 10:38:56.712358] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:49.628 [2024-11-25 10:38:56.732559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.628 [2024-11-25 10:38:56.732615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:49.628 [2024-11-25 10:38:56.732631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.233 ms 00:33:49.628 [2024-11-25 10:38:56.732641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.628 [2024-11-25 10:38:56.732719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.628 [2024-11-25 10:38:56.732732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:49.628 [2024-11-25 10:38:56.732743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:33:49.628 [2024-11-25 10:38:56.732753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.740008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 
10:38:56.740050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:49.889 [2024-11-25 10:38:56.740062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.175 ms 00:33:49.889 [2024-11-25 10:38:56.740073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.740141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.740155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:49.889 [2024-11-25 10:38:56.740165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:49.889 [2024-11-25 10:38:56.740176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.740221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.740237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:49.889 [2024-11-25 10:38:56.740248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:49.889 [2024-11-25 10:38:56.740258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.740287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:49.889 [2024-11-25 10:38:56.745283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.745316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:49.889 [2024-11-25 10:38:56.745397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.011 ms 00:33:49.889 [2024-11-25 10:38:56.745407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.745444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.745455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:49.889 [2024-11-25 10:38:56.745466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:49.889 [2024-11-25 10:38:56.745476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.745552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:49.889 [2024-11-25 10:38:56.745581] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:49.889 [2024-11-25 10:38:56.745617] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:49.889 [2024-11-25 10:38:56.745636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:49.889 [2024-11-25 10:38:56.745724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:49.889 [2024-11-25 10:38:56.745737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:49.889 [2024-11-25 10:38:56.745750] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:49.889 [2024-11-25 10:38:56.745762] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:49.889 [2024-11-25 10:38:56.745777] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:49.889 [2024-11-25 10:38:56.745789] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:49.889 [2024-11-25 10:38:56.745798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:49.889 [2024-11-25 10:38:56.745808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:49.889 [2024-11-25 10:38:56.745818] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:49.889 [2024-11-25 10:38:56.745829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.745838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:49.889 [2024-11-25 10:38:56.745849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:33:49.889 [2024-11-25 10:38:56.745859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.745936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.889 [2024-11-25 10:38:56.745946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:49.889 [2024-11-25 10:38:56.745960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:33:49.889 [2024-11-25 10:38:56.745970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.889 [2024-11-25 10:38:56.746060] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:49.889 [2024-11-25 10:38:56.746072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:49.889 [2024-11-25 10:38:56.746083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:49.889 [2024-11-25 10:38:56.746094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.889 [2024-11-25 10:38:56.746104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:49.889 [2024-11-25 10:38:56.746113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:49.889 [2024-11-25 10:38:56.746122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:49.889 [2024-11-25 10:38:56.746131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:49.889 [2024-11-25 10:38:56.746140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:49.889 [2024-11-25 10:38:56.746150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.889 [2024-11-25 10:38:56.746161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:49.889 [2024-11-25 10:38:56.746171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:49.890 [2024-11-25 10:38:56.746180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:49.890 [2024-11-25 10:38:56.746199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:49.890 [2024-11-25 10:38:56.746209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:49.890 [2024-11-25 10:38:56.746227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:49.890 [2024-11-25 10:38:56.746235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746245] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:49.890 [2024-11-25 10:38:56.746254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:49.890 [2024-11-25 10:38:56.746295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:49.890 [2024-11-25 10:38:56.746322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:49.890 [2024-11-25 10:38:56.746350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:49.890 [2024-11-25 10:38:56.746377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:49.890 [2024-11-25 10:38:56.746405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:49.890 [2024-11-25 10:38:56.746432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:49.890 [2024-11-25 10:38:56.746459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:49.890 [2024-11-25 10:38:56.746469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:49.890 [2024-11-25 10:38:56.746487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:49.890 [2024-11-25 10:38:56.746510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:49.890 [2024-11-25 10:38:56.746534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:49.890 [2024-11-25 10:38:56.746543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:49.890 [2024-11-25 10:38:56.746552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:49.890 [2024-11-25 10:38:56.746562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:49.890 [2024-11-25 10:38:56.746572] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:49.890 [2024-11-25 10:38:56.746581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:49.890 [2024-11-25 10:38:56.746591] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:49.890 [2024-11-25 10:38:56.746603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:49.890 [2024-11-25 10:38:56.746625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:49.890 [2024-11-25 10:38:56.746656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:49.890 [2024-11-25 10:38:56.746666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:49.890 [2024-11-25 10:38:56.746676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:49.890 [2024-11-25 10:38:56.746687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:49.890 [2024-11-25 10:38:56.746758] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:49.890 [2024-11-25 10:38:56.746769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:49.890 [2024-11-25 10:38:56.746791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:49.890 [2024-11-25 10:38:56.746802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:49.890 [2024-11-25 10:38:56.746813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:49.890 [2024-11-25 10:38:56.746824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.890 [2024-11-25 10:38:56.746834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:49.890 [2024-11-25 10:38:56.746844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.819 ms 00:33:49.890 [2024-11-25 10:38:56.746854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.890 [2024-11-25 10:38:56.746901] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:49.890 [2024-11-25 10:38:56.746917] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:54.085 [2024-11-25 10:39:00.646834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.085 [2024-11-25 10:39:00.646900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:54.085 [2024-11-25 10:39:00.646917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3906.260 ms 00:33:54.085 [2024-11-25 10:39:00.646944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.085 [2024-11-25 10:39:00.686854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.085 [2024-11-25 10:39:00.686915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:54.085 [2024-11-25 10:39:00.686931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.586 ms 00:33:54.085 [2024-11-25 10:39:00.686943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.085 [2024-11-25 10:39:00.687073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.085 [2024-11-25 10:39:00.687087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:54.085 [2024-11-25 10:39:00.687099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:54.085 [2024-11-25 10:39:00.687110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.085 [2024-11-25 10:39:00.732839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.085 [2024-11-25 10:39:00.732898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:54.085 [2024-11-25 10:39:00.732916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.756 ms 00:33:54.085 [2024-11-25 10:39:00.732927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.732977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.732988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:54.086 [2024-11-25 10:39:00.732999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:54.086 [2024-11-25 10:39:00.733010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.733540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.733555] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:54.086 [2024-11-25 10:39:00.733567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.436 ms 00:33:54.086 [2024-11-25 10:39:00.733581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.733629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.733641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:54.086 [2024-11-25 10:39:00.733651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:54.086 [2024-11-25 10:39:00.733661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.753792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.753844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:54.086 [2024-11-25 10:39:00.753858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.139 ms 00:33:54.086 [2024-11-25 10:39:00.753869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.772823] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:54.086 [2024-11-25 10:39:00.772869] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:54.086 [2024-11-25 10:39:00.772885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.772896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:54.086 [2024-11-25 10:39:00.772908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.899 ms 00:33:54.086 [2024-11-25 10:39:00.772918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.793312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.793360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:54.086 [2024-11-25 10:39:00.793374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.378 ms 00:33:54.086 [2024-11-25 10:39:00.793385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.811943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.811988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:54.086 [2024-11-25 10:39:00.812002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.531 ms 00:33:54.086 [2024-11-25 10:39:00.812012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.830489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.830543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:54.086 [2024-11-25 10:39:00.830556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.461 ms 00:33:54.086 [2024-11-25 10:39:00.830566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.831410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.831440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:54.086 [2024-11-25 
10:39:00.831452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.730 ms 00:33:54.086 [2024-11-25 10:39:00.831462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.931575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.931647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:54.086 [2024-11-25 10:39:00.931664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.250 ms 00:33:54.086 [2024-11-25 10:39:00.931675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.943340] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:54.086 [2024-11-25 10:39:00.944411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.944442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:54.086 [2024-11-25 10:39:00.944456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.686 ms 00:33:54.086 [2024-11-25 10:39:00.944467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.944590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.944608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:54.086 [2024-11-25 10:39:00.944620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:54.086 [2024-11-25 10:39:00.944630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.944691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.944704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:54.086 [2024-11-25 10:39:00.944714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:54.086 [2024-11-25 10:39:00.944723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.944746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.944757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:54.086 [2024-11-25 10:39:00.944771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:54.086 [2024-11-25 10:39:00.944781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.944818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:54.086 [2024-11-25 10:39:00.944830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.944840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:54.086 [2024-11-25 10:39:00.944850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:54.086 [2024-11-25 10:39:00.944860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.981561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.981622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:54.086 [2024-11-25 10:39:00.981638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.738 ms 00:33:54.086 [2024-11-25 10:39:00.981648] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.981734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.086 [2024-11-25 10:39:00.981747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:54.086 [2024-11-25 10:39:00.981758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:33:54.086 [2024-11-25 10:39:00.981768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.086 [2024-11-25 10:39:00.982930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4279.725 ms, result 0 00:33:54.086 [2024-11-25 10:39:00.997920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.086 [2024-11-25 10:39:01.013912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:54.086 [2024-11-25 10:39:01.023471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:54.345 10:39:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.345 10:39:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:54.345 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:54.345 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:54.345 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:54.605 [2024-11-25 10:39:01.494965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.605 [2024-11-25 10:39:01.495024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:54.605 [2024-11-25 10:39:01.495040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:54.605 [2024-11-25 10:39:01.495057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.605 [2024-11-25 10:39:01.495082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.605 [2024-11-25 10:39:01.495094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:54.605 [2024-11-25 10:39:01.495105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:54.605 [2024-11-25 10:39:01.495115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.605 [2024-11-25 10:39:01.495136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:54.605 [2024-11-25 10:39:01.495147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:54.605 [2024-11-25 10:39:01.495157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:54.605 [2024-11-25 10:39:01.495167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:54.605 [2024-11-25 10:39:01.495228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:33:54.605 true 00:33:54.605 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:54.605 { 00:33:54.605 "name": "ftl", 00:33:54.605 "properties": [ 00:33:54.605 { 00:33:54.605 "name": "superblock_version", 00:33:54.605 "value": 5, 00:33:54.605 "read-only": true 00:33:54.605 }, 
00:33:54.605     {
00:33:54.605       "name": "base_device",
00:33:54.605       "bands": [
00:33:54.605         {
00:33:54.605           "id": 0,
00:33:54.605           "state": "CLOSED",
00:33:54.605           "validity": 1.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 1,
00:33:54.605           "state": "CLOSED",
00:33:54.605           "validity": 1.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 2,
00:33:54.605           "state": "CLOSED",
00:33:54.605           "validity": 0.007843137254901933
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 3,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 4,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 5,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 6,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 7,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 8,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 9,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 10,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 11,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 12,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 13,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 14,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 15,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 16,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 17,
00:33:54.605           "state": "FREE",
00:33:54.605           "validity": 0.0
00:33:54.605         }
00:33:54.605       ],
00:33:54.605       "read-only": true
00:33:54.605     },
00:33:54.605     {
00:33:54.605       "name": "cache_device",
00:33:54.605       "type": "bdev",
00:33:54.605       "chunks": [
00:33:54.605         {
00:33:54.605           "id": 0,
00:33:54.605           "state": "INACTIVE",
00:33:54.605           "utilization": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 1,
00:33:54.605           "state": "OPEN",
00:33:54.605           "utilization": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 2,
00:33:54.605           "state": "OPEN",
00:33:54.605           "utilization": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 3,
00:33:54.605           "state": "FREE",
00:33:54.605           "utilization": 0.0
00:33:54.605         },
00:33:54.605         {
00:33:54.605           "id": 4,
00:33:54.605           "state": "FREE",
00:33:54.605           "utilization": 0.0
00:33:54.605         }
00:33:54.605       ],
00:33:54.605       "read-only": true
00:33:54.605     },
00:33:54.605     {
00:33:54.605       "name": "verbose_mode",
00:33:54.605       "value": true,
00:33:54.605       "unit": "",
00:33:54.605       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:33:54.605     },
00:33:54.605     {
00:33:54.605       "name": "prep_upgrade_on_shutdown",
00:33:54.605       "value": false,
00:33:54.605       "unit": "",
00:33:54.605       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:33:54.605     }
00:33:54.605   ]
00:33:54.605 }
00:33:54.606 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:33:54.606 10:39:01
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:54.606 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:54.865 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:54.865 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:54.865 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:54.865 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:54.865 10:39:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:55.124 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:55.124 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:55.124 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:55.124 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:55.124 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:55.125 Validate MD5 checksum, iteration 1 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:55.125 10:39:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:55.125 [2024-11-25 10:39:02.216167] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
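The xtrace above is the first pass of test_validate_checksum (upgrade_shutdown.sh@96-105): spdk_dd reads a 1024 MiB window from ftln1 over NVMe/TCP into a scratch file, the window is hashed, and the hash must match the one recorded for the same offset before the restart. A hedged reconstruction of the loop, with helper names, flags, and the file path taken from the trace; the reference-sum array name is a placeholder introduced here for illustration, and "iterations" comes from the surrounding harness:

    # Re-read the FTL bdev window by window and compare MD5 sums against
    # the values captured before the target was restarted.
    test_validate_checksum_sketch() {
        local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            (( skip += 1024 ))
            sum=$(md5sum "$file" | cut -f1 -d' ')
            [[ $sum == "${ref_sums[i]}" ]] || return 1   # ref_sums: placeholder name
        done
    }

Iteration 1 below lands on e9ec13e5454e7859039a5a797b33716a, which matches the stored value, so the loop advances skip to 1024 for iteration 2.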
00:33:55.125 [2024-11-25 10:39:02.216292] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83900 ] 00:33:55.384 [2024-11-25 10:39:02.397328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.644 [2024-11-25 10:39:02.535889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.552  [2024-11-25T10:39:04.923Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-25T10:39:06.826Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:33:59.714 00:33:59.714 10:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:59.714 10:39:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:01.649 Validate MD5 checksum, iteration 2 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e9ec13e5454e7859039a5a797b33716a 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e9ec13e5454e7859039a5a797b33716a != \e\9\e\c\1\3\e\5\4\5\4\e\7\8\5\9\0\3\9\a\5\a\7\9\7\b\3\3\7\1\6\a ]] 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:01.649 10:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:01.649 [2024-11-25 10:39:08.328077] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 
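Each of these transfers goes through the tcp_dd wrapper traced at ftl/common.sh@198-199: tcp_initiator_setup (common.sh@151-154) verifies that the initiator config ini.json exists, then spdk_dd is launched against the target over the NVMe/TCP listener on 127.0.0.1:4420 shown earlier. A sketch with the flags copied from the xtrace; the failure path is an assumption:

    # Run spdk_dd as an NVMe/TCP initiator against the bdev in ini.json.
    tcp_dd() {
        local rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
        [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] || return 1
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

Pinning the initiator to core 1 keeps it off core 0, where the target's single reactor runs, so the dd traffic and the FTL core poller do not contend for the same CPU.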
00:34:01.649 [2024-11-25 10:39:08.328401] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83967 ] 00:34:01.649 [2024-11-25 10:39:08.507846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.649 [2024-11-25 10:39:08.645943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:03.554  [2024-11-25T10:39:11.235Z] Copying: 638/1024 [MB] (638 MBps) [2024-11-25T10:39:13.770Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:34:06.658 00:34:06.658 10:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:06.658 10:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:08.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9eea0a2923462a6ba6dc3e56072254f9 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9eea0a2923462a6ba6dc3e56072254f9 != \9\e\e\a\0\a\2\9\2\3\4\6\2\a\6\b\a\6\d\c\3\e\5\6\0\7\2\2\5\4\f\9 ]] 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83814 ]] 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83814 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84044 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84044 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84044 ']' 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
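With both windows validated, the test moves to the dirty-shutdown phase: tcp_target_shutdown_dirty (ftl/common.sh@137-139) SIGKILLs the target instead of shutting it down over RPC, which is why the "line 834: 83814 Killed" message from the harness shows up below and why the next startup (pid 84044) has to recover rather than load a clean superblock. A minimal sketch of that step, assuming only what the xtrace shows:

    # Kill the target hard so FTL cannot persist a clean shutdown state.
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] || return 0
        kill -9 "$spdk_tgt_pid"   # deliberately no graceful FTL shutdown
        unset spdk_tgt_pid
    }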
00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.565 10:39:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:08.565 [2024-11-25 10:39:15.463944] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:34:08.565 [2024-11-25 10:39:15.464066] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84044 ] 00:34:08.565 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83814 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:34:08.565 [2024-11-25 10:39:15.643886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.825 [2024-11-25 10:39:15.759089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.764 [2024-11-25 10:39:16.712560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:09.764 [2024-11-25 10:39:16.712624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:09.764 [2024-11-25 10:39:16.859278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:09.764 [2024-11-25 10:39:16.859330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:09.764 [2024-11-25 10:39:16.859346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:09.764 [2024-11-25 10:39:16.859357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:09.764 [2024-11-25 10:39:16.859413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:09.764 [2024-11-25 10:39:16.859426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:09.764 [2024-11-25 10:39:16.859437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:09.764 [2024-11-25 10:39:16.859448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:09.764 [2024-11-25 10:39:16.859479] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:09.764 [2024-11-25 10:39:16.860517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:09.764 [2024-11-25 10:39:16.860547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:09.764 [2024-11-25 10:39:16.860558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:09.764 [2024-11-25 10:39:16.860570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.082 ms 00:34:09.764 [2024-11-25 10:39:16.860581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:09.764 [2024-11-25 10:39:16.861103] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:10.025 [2024-11-25 10:39:16.885054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.885097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:10.025 [2024-11-25 10:39:16.885113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.990 ms 00:34:10.025 [2024-11-25 10:39:16.885125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.899142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:34:10.025 [2024-11-25 10:39:16.899186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:10.025 [2024-11-25 10:39:16.899199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:34:10.025 [2024-11-25 10:39:16.899210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.899709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.899731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:10.025 [2024-11-25 10:39:16.899743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:34:10.025 [2024-11-25 10:39:16.899753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.899814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.899830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:10.025 [2024-11-25 10:39:16.899840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:34:10.025 [2024-11-25 10:39:16.899850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.899879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.899890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:10.025 [2024-11-25 10:39:16.899901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:10.025 [2024-11-25 10:39:16.899910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.899931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:10.025 [2024-11-25 10:39:16.903989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.904023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:10.025 [2024-11-25 10:39:16.904035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.068 ms 00:34:10.025 [2024-11-25 10:39:16.904049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.904081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.904091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:10.025 [2024-11-25 10:39:16.904103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:10.025 [2024-11-25 10:39:16.904113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.904149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:10.025 [2024-11-25 10:39:16.904172] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:10.025 [2024-11-25 10:39:16.904205] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:10.025 [2024-11-25 10:39:16.904225] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:10.025 [2024-11-25 10:39:16.904315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:10.025 [2024-11-25 10:39:16.904329] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:10.025 [2024-11-25 10:39:16.904343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:10.025 [2024-11-25 10:39:16.904356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904368] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:10.025 [2024-11-25 10:39:16.904389] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:10.025 [2024-11-25 10:39:16.904399] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:10.025 [2024-11-25 10:39:16.904408] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:10.025 [2024-11-25 10:39:16.904421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.904431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:10.025 [2024-11-25 10:39:16.904442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:34:10.025 [2024-11-25 10:39:16.904452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.904537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.025 [2024-11-25 10:39:16.904549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:10.025 [2024-11-25 10:39:16.904560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:34:10.025 [2024-11-25 10:39:16.904569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.025 [2024-11-25 10:39:16.904658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:10.025 [2024-11-25 10:39:16.904675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:10.025 [2024-11-25 10:39:16.904686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:10.025 [2024-11-25 10:39:16.904717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:10.025 [2024-11-25 10:39:16.904736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:10.025 [2024-11-25 10:39:16.904745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:10.025 [2024-11-25 10:39:16.904755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:10.025 [2024-11-25 10:39:16.904777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:10.025 [2024-11-25 10:39:16.904786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:10.025 [2024-11-25 10:39:16.904805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:34:10.025 [2024-11-25 10:39:16.904815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:10.025 [2024-11-25 10:39:16.904833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:10.025 [2024-11-25 10:39:16.904843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.025 [2024-11-25 10:39:16.904852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:10.025 [2024-11-25 10:39:16.904861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:10.025 [2024-11-25 10:39:16.904881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:10.025 [2024-11-25 10:39:16.904899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:10.025 [2024-11-25 10:39:16.904909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:10.025 [2024-11-25 10:39:16.904928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:10.025 [2024-11-25 10:39:16.904937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:10.025 [2024-11-25 10:39:16.904957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:10.025 [2024-11-25 10:39:16.904967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:10.025 [2024-11-25 10:39:16.904976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:10.025 [2024-11-25 10:39:16.904986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:10.026 [2024-11-25 10:39:16.904995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:10.026 [2024-11-25 10:39:16.905012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:10.026 [2024-11-25 10:39:16.905021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:10.026 [2024-11-25 10:39:16.905039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:10.026 [2024-11-25 10:39:16.905067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:10.026 [2024-11-25 10:39:16.905078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905087] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:10.026 [2024-11-25 10:39:16.905097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:10.026 [2024-11-25 10:39:16.905106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:10.026 [2024-11-25 10:39:16.905117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:34:10.026 [2024-11-25 10:39:16.905127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:10.026 [2024-11-25 10:39:16.905137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:10.026 [2024-11-25 10:39:16.905147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:10.026 [2024-11-25 10:39:16.905157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:10.026 [2024-11-25 10:39:16.905166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:10.026 [2024-11-25 10:39:16.905175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:10.026 [2024-11-25 10:39:16.905186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:10.026 [2024-11-25 10:39:16.905199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:10.026 [2024-11-25 10:39:16.905222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:10.026 [2024-11-25 10:39:16.905252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:10.026 [2024-11-25 10:39:16.905263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:10.026 [2024-11-25 10:39:16.905273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:10.026 [2024-11-25 10:39:16.905283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:10.026 [2024-11-25 10:39:16.905357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:34:10.026 [2024-11-25 10:39:16.905369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:10.026 [2024-11-25 10:39:16.905395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:10.026 [2024-11-25 10:39:16.905406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:10.026 [2024-11-25 10:39:16.905418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:10.026 [2024-11-25 10:39:16.905440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.905450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:10.026 [2024-11-25 10:39:16.905460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:34:10.026 [2024-11-25 10:39:16.905470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.941289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.941331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:10.026 [2024-11-25 10:39:16.941346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.815 ms 00:34:10.026 [2024-11-25 10:39:16.941357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.941399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.941410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:10.026 [2024-11-25 10:39:16.941422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:34:10.026 [2024-11-25 10:39:16.941439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.983546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.983591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:10.026 [2024-11-25 10:39:16.983605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.098 ms 00:34:10.026 [2024-11-25 10:39:16.983616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.983663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.983675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:10.026 [2024-11-25 10:39:16.983686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:10.026 [2024-11-25 10:39:16.983700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.983838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.983853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:10.026 [2024-11-25 10:39:16.983863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:34:10.026 [2024-11-25 10:39:16.983874] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:16.983913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:16.983924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:10.026 [2024-11-25 10:39:16.983935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:34:10.026 [2024-11-25 10:39:16.983945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.026 [2024-11-25 10:39:17.004594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.026 [2024-11-25 10:39:17.004638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:10.027 [2024-11-25 10:39:17.004653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.656 ms 00:34:10.027 [2024-11-25 10:39:17.004668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.027 [2024-11-25 10:39:17.004791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.027 [2024-11-25 10:39:17.004807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:34:10.027 [2024-11-25 10:39:17.004819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:10.027 [2024-11-25 10:39:17.004830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.027 [2024-11-25 10:39:17.042690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.027 [2024-11-25 10:39:17.042736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:34:10.027 [2024-11-25 10:39:17.042751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.901 ms 00:34:10.027 [2024-11-25 10:39:17.042763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.027 [2024-11-25 10:39:17.057556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.027 [2024-11-25 10:39:17.057604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:10.027 [2024-11-25 10:39:17.057616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 00:34:10.027 [2024-11-25 10:39:17.057627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.143637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.143709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:10.301 [2024-11-25 10:39:17.143726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.081 ms 00:34:10.301 [2024-11-25 10:39:17.143738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.143915] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:34:10.301 [2024-11-25 10:39:17.144036] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:34:10.301 [2024-11-25 10:39:17.144150] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:34:10.301 [2024-11-25 10:39:17.144261] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:34:10.301 [2024-11-25 10:39:17.144275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.144285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:34:10.301 [2024-11-25 
10:39:17.144297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:34:10.301 [2024-11-25 10:39:17.144307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.144404] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:34:10.301 [2024-11-25 10:39:17.144425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.144440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:34:10.301 [2024-11-25 10:39:17.144451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:34:10.301 [2024-11-25 10:39:17.144462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.167760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.167813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:34:10.301 [2024-11-25 10:39:17.167828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.306 ms 00:34:10.301 [2024-11-25 10:39:17.167840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.182141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.182182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:34:10.301 [2024-11-25 10:39:17.182195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:34:10.301 [2024-11-25 10:39:17.182206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.301 [2024-11-25 10:39:17.182304] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:34:10.301 [2024-11-25 10:39:17.182523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.301 [2024-11-25 10:39:17.182536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:10.301 [2024-11-25 10:39:17.182548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.220 ms 00:34:10.301 [2024-11-25 10:39:17.182558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.894 [2024-11-25 10:39:17.769830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.894 [2024-11-25 10:39:17.769898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:10.894 [2024-11-25 10:39:17.769917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 587.055 ms 00:34:10.894 [2024-11-25 10:39:17.769928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.894 [2024-11-25 10:39:17.775793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.894 [2024-11-25 10:39:17.775836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:10.894 [2024-11-25 10:39:17.775849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.269 ms 00:34:10.894 [2024-11-25 10:39:17.775868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.894 [2024-11-25 10:39:17.776404] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:34:10.894 [2024-11-25 10:39:17.776440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.894 [2024-11-25 10:39:17.776452] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:10.894 [2024-11-25 10:39:17.776465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:34:10.894 [2024-11-25 10:39:17.776475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.894 [2024-11-25 10:39:17.776519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.894 [2024-11-25 10:39:17.776532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:10.894 [2024-11-25 10:39:17.776544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:10.894 [2024-11-25 10:39:17.776561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:10.894 [2024-11-25 10:39:17.776598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 595.258 ms, result 0 00:34:10.894 [2024-11-25 10:39:17.776641] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:34:10.894 [2024-11-25 10:39:17.776713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:10.894 [2024-11-25 10:39:17.776724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:10.894 [2024-11-25 10:39:17.776734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:34:10.894 [2024-11-25 10:39:17.776743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 10:39:18.363163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.363232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:11.463 [2024-11-25 10:39:18.363271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 586.158 ms 00:34:11.463 [2024-11-25 10:39:18.363282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 10:39:18.368905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.368948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:11.463 [2024-11-25 10:39:18.368962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:34:11.463 [2024-11-25 10:39:18.368972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 10:39:18.369705] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:34:11.463 [2024-11-25 10:39:18.369743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.369754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:11.463 [2024-11-25 10:39:18.369765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:34:11.463 [2024-11-25 10:39:18.369776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 10:39:18.369808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.369821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:11.463 [2024-11-25 10:39:18.369832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:11.463 [2024-11-25 10:39:18.369841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 
10:39:18.369879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 594.199 ms, result 0 00:34:11.463 [2024-11-25 10:39:18.369920] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:11.463 [2024-11-25 10:39:18.369933] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:11.463 [2024-11-25 10:39:18.369945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.369956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:34:11.463 [2024-11-25 10:39:18.369966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1189.588 ms 00:34:11.463 [2024-11-25 10:39:18.369976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.463 [2024-11-25 10:39:18.370006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.463 [2024-11-25 10:39:18.370022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:34:11.463 [2024-11-25 10:39:18.370032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:11.463 [2024-11-25 10:39:18.370053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.381619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:11.464 [2024-11-25 10:39:18.381753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.381767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:11.464 [2024-11-25 10:39:18.381780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.701 ms 00:34:11.464 [2024-11-25 10:39:18.381790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.382362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.382385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:34:11.464 [2024-11-25 10:39:18.382401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:34:11.464 [2024-11-25 10:39:18.382412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:34:11.464 [2024-11-25 10:39:18.384498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.023 ms 00:34:11.464 [2024-11-25 10:39:18.384510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:34:11.464 [2024-11-25 10:39:18.384575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:11.464 [2024-11-25 10:39:18.384590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:11.464 
[2024-11-25 10:39:18.384709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:34:11.464 [2024-11-25 10:39:18.384719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:11.464 [2024-11-25 10:39:18.384762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:11.464 [2024-11-25 10:39:18.384773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:11.464 [2024-11-25 10:39:18.384817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:11.464 [2024-11-25 10:39:18.384837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:11.464 [2024-11-25 10:39:18.384847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.384899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:11.464 [2024-11-25 10:39:18.384911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:11.464 [2024-11-25 10:39:18.384922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:11.464 [2024-11-25 10:39:18.384932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:11.464 [2024-11-25 10:39:18.385847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1528.607 ms, result 0 00:34:11.464 [2024-11-25 10:39:18.398184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.464 [2024-11-25 10:39:18.414158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:11.464 [2024-11-25 10:39:18.423781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:11.464 Validate MD5 checksum, iteration 1 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:11.464 10:39:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:11.464 10:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:11.464 [2024-11-25 10:39:18.556710] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization... 00:34:11.464 [2024-11-25 10:39:18.556826] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84080 ] 00:34:11.723 [2024-11-25 10:39:18.735541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.983 [2024-11-25 10:39:18.877807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.910  [2024-11-25T10:39:21.281Z] Copying: 603/1024 [MB] (603 MBps) [2024-11-25T10:39:23.816Z] Copying: 1024/1024 [MB] (average 613 MBps) 00:34:16.705 00:34:16.705 10:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:16.705 10:39:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e9ec13e5454e7859039a5a797b33716a 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e9ec13e5454e7859039a5a797b33716a != \e\9\e\c\1\3\e\5\4\5\4\e\7\8\5\9\0\3\9\a\5\a\7\9\7\b\3\3\7\1\6\a ]] 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:18.624 Validate MD5 checksum, iteration 2 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:18.624 10:39:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:18.624 [2024-11-25 10:39:25.445781] Starting SPDK v25.01-pre git sha1 
eb055bb93 / DPDK 24.03.0 initialization... 00:34:18.624 [2024-11-25 10:39:25.445896] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84157 ] 00:34:18.624 [2024-11-25 10:39:25.624628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.883 [2024-11-25 10:39:25.760105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.793  [2024-11-25T10:39:28.163Z] Copying: 626/1024 [MB] (626 MBps) [2024-11-25T10:39:29.540Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:34:22.428 00:34:22.428 10:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:22.428 10:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9eea0a2923462a6ba6dc3e56072254f9 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9eea0a2923462a6ba6dc3e56072254f9 != \9\e\e\a\0\a\2\9\2\3\4\6\2\a\6\b\a\6\d\c\3\e\5\6\0\7\2\2\5\4\f\9 ]] 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84044 ]] 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84044 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84044 ']' 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84044 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84044 00:34:24.332 killing process with pid 84044 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84044' 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84044 00:34:24.332 10:39:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84044 00:34:25.710 [2024-11-25 10:39:32.464380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:25.710 [2024-11-25 10:39:32.483940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.483993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:25.710 [2024-11-25 10:39:32.484009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:25.710 [2024-11-25 10:39:32.484020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.484043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:25.710 [2024-11-25 10:39:32.488129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.488163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:25.710 [2024-11-25 10:39:32.488181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.076 ms 00:34:25.710 [2024-11-25 10:39:32.488191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.488388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.488400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:25.710 [2024-11-25 10:39:32.488411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:34:25.710 [2024-11-25 10:39:32.488421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.489563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.489599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:25.710 [2024-11-25 10:39:32.489612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.128 ms 00:34:25.710 [2024-11-25 10:39:32.489628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.490602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.490630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:25.710 [2024-11-25 10:39:32.490643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.944 ms 00:34:25.710 [2024-11-25 10:39:32.490653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.505809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.505849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:25.710 [2024-11-25 10:39:32.505862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.126 ms 00:34:25.710 [2024-11-25 10:39:32.505878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.514162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.514203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:25.710 [2024-11-25 10:39:32.514216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.259 ms 00:34:25.710 [2024-11-25 10:39:32.514226] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.514326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.514340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:25.710 [2024-11-25 10:39:32.514351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:34:25.710 [2024-11-25 10:39:32.514367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.529226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.529263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:25.710 [2024-11-25 10:39:32.529276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.865 ms 00:34:25.710 [2024-11-25 10:39:32.529286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.544026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.544066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:25.710 [2024-11-25 10:39:32.544078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.728 ms 00:34:25.710 [2024-11-25 10:39:32.544088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.558446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.558486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:25.710 [2024-11-25 10:39:32.558507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.346 ms 00:34:25.710 [2024-11-25 10:39:32.558517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.572666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.710 [2024-11-25 10:39:32.572704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:25.710 [2024-11-25 10:39:32.572716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.099 ms 00:34:25.710 [2024-11-25 10:39:32.572726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.710 [2024-11-25 10:39:32.572760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:25.710 [2024-11-25 10:39:32.572777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:25.710 [2024-11-25 10:39:32.572790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:25.710 [2024-11-25 10:39:32.572801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:25.710 [2024-11-25 10:39:32.572812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 
[2024-11-25 10:39:32.572865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:25.710 [2024-11-25 10:39:32.572971] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:25.711 [2024-11-25 10:39:32.572981] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2e9c3c50-b3cb-4afd-b40e-e431bf764b06 00:34:25.711 [2024-11-25 10:39:32.572992] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:25.711 [2024-11-25 10:39:32.573001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:25.711 [2024-11-25 10:39:32.573011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:25.711 [2024-11-25 10:39:32.573021] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:25.711 [2024-11-25 10:39:32.573031] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:25.711 [2024-11-25 10:39:32.573041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:25.711 [2024-11-25 10:39:32.573051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:25.711 [2024-11-25 10:39:32.573060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:25.711 [2024-11-25 10:39:32.573068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:25.711 [2024-11-25 10:39:32.573078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.711 [2024-11-25 10:39:32.573094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:25.711 [2024-11-25 10:39:32.573106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:34:25.711 [2024-11-25 10:39:32.573116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.711 [2024-11-25 10:39:32.593117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.711 [2024-11-25 10:39:32.593151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:25.711 [2024-11-25 10:39:32.593165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.001 ms 00:34:25.711 [2024-11-25 10:39:32.593175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
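[editor's note] A quick consistency check on the band dump and statistics just printed: the three closed bands hold 261120 + 261120 + 2048 = 524288 valid blocks, exactly the reported "total valid LBAs: 524288". At the 4 KiB block size implied by the earlier layout dump (the 18432.00 MiB data_btm region spans 0x480000 = 4,718,592 blocks, i.e. 4 KiB per block — an inference from the dump, not something the log states directly), that is 2048 MiB, matching the two 1024 MiB windows the validation loop reads back (skip 0 and skip 1024):

echo $(( (261120 + 261120 + 2048) * 4 / 1024 ))   # => 2048 (MiB), assuming 4 KiB blocks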
00:34:25.711 [2024-11-25 10:39:32.593766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:34:25.711 [2024-11-25 10:39:32.593785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:34:25.711 [2024-11-25 10:39:32.593796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms
00:34:25.711 [2024-11-25 10:39:32.593807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.711 [2024-11-25 10:39:32.659695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.711 [2024-11-25 10:39:32.659740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:34:25.711 [2024-11-25 10:39:32.659753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.711 [2024-11-25 10:39:32.659764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.711 [2024-11-25 10:39:32.659811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.711 [2024-11-25 10:39:32.659822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:34:25.711 [2024-11-25 10:39:32.659832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.711 [2024-11-25 10:39:32.659843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.711 [2024-11-25 10:39:32.659926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.711 [2024-11-25 10:39:32.659946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:34:25.711 [2024-11-25 10:39:32.659956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.711 [2024-11-25 10:39:32.659967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.711 [2024-11-25 10:39:32.659990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.711 [2024-11-25 10:39:32.660004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:34:25.711 [2024-11-25 10:39:32.660014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.711 [2024-11-25 10:39:32.660024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.711 [2024-11-25 10:39:32.784685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.711 [2024-11-25 10:39:32.784745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:34:25.711 [2024-11-25 10:39:32.784761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.711 [2024-11-25 10:39:32.784772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.884982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:34:25.969 [2024-11-25 10:39:32.885058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.885177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:34:25.969 [2024-11-25 10:39:32.885201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.885273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:34:25.969 [2024-11-25 10:39:32.885310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.885439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:34:25.969 [2024-11-25 10:39:32.885463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.885542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:34:25.969 [2024-11-25 10:39:32.885570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.969 [2024-11-25 10:39:32.885617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.969 [2024-11-25 10:39:32.885629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:34:25.969 [2024-11-25 10:39:32.885639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.969 [2024-11-25 10:39:32.885649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.970 [2024-11-25 10:39:32.885690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:34:25.970 [2024-11-25 10:39:32.885702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:34:25.970 [2024-11-25 10:39:32.885715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:34:25.970 [2024-11-25 10:39:32.885726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:34:25.970 [2024-11-25 10:39:32.885841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 402.523 ms, result 0
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:34:27.349 Remove shared memory files 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83814
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:34:27.349
00:34:27.349 real 1m30.078s
00:34:27.349 user 2m1.697s
00:34:27.349 sys 0m25.299s
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:27.349 10:39:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:34:27.349 ************************************
00:34:27.349 END TEST ftl_upgrade_shutdown
00:34:27.349 ************************************
00:34:27.349 10:39:34 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@14 -- # killprocess 76669
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@954 -- # '[' -z 76669 ']'
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@958 -- # kill -0 76669
00:34:27.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76669) - No such process
00:34:27.350 Process with pid 76669 is not found
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76669 is not found'
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84288
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:34:27.350 10:39:34 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84288
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@835 -- # '[' -z 84288 ']'
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:27.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 10:39:34 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:27.350 10:39:34 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:27.350 [2024-11-25 10:39:34.335447] Starting SPDK v25.01-pre git sha1 eb055bb93 / DPDK 24.03.0 initialization...
00:34:27.350 [2024-11-25 10:39:34.335584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84288 ]
00:34:27.609 [2024-11-25 10:39:34.514739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:27.609 [2024-11-25 10:39:34.624401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:28.547 10:39:35 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:28.547 10:39:35 ftl -- common/autotest_common.sh@868 -- # return 0
00:34:28.547 10:39:35 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:34:28.806 nvme0n1
00:34:28.806 10:39:35 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:34:28.806 10:39:35 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:34:28.806 10:39:35 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:34:29.065 10:39:35 ftl -- ftl/common.sh@28 -- # stores=dc8293f1-9e1e-4a89-a88d-54257ae64aa9
00:34:29.065 10:39:35 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:34:29.065 10:39:35 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc8293f1-9e1e-4a89-a88d-54257ae64aa9
00:34:29.065 10:39:36 ftl -- ftl/ftl.sh@23 -- # killprocess 84288
00:34:29.066 10:39:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 84288 ']'
00:34:29.066 10:39:36 ftl -- common/autotest_common.sh@958 -- # kill -0 84288
00:34:29.066 10:39:36 ftl -- common/autotest_common.sh@959 -- # uname
00:34:29.066 10:39:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:29.066 10:39:36 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84288
00:34:29.326 10:39:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:29.326 10:39:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:29.326 killing process with pid 84288 10:39:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84288'
00:34:29.326 10:39:36 ftl -- common/autotest_common.sh@973 -- # kill 84288
00:34:29.326 10:39:36 ftl -- common/autotest_common.sh@978 -- # wait 84288
00:34:31.873 10:39:38 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:31.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:31.873 Waiting for block devices as requested
00:34:32.145 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:32.145 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:32.145 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:32.403 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:37.676 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:37.676 10:39:44 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:37.676 Remove shared memory files 10:39:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:37.676 10:39:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:37.676 10:39:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:37.676 10:39:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:37.676 10:39:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:37.676 10:39:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:37.676
00:34:37.676 real 11m14.914s
00:34:37.676 user 13m54.824s
00:34:37.676 sys 1m31.682s
00:34:37.676 10:39:44 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:37.676 10:39:44 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:37.676 ************************************
00:34:37.676 END TEST ftl
00:34:37.676 ************************************
00:34:37.676 10:39:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:37.676 10:39:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:37.676 10:39:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:37.676 10:39:44 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:37.676 10:39:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:37.676 10:39:44 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:37.676 10:39:44 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:37.676 10:39:44 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:37.676 10:39:44 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:37.676 10:39:44 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:37.676 10:39:44 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:37.676 10:39:44 -- common/autotest_common.sh@10 -- # set +x
00:34:37.677 10:39:44 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:37.677 10:39:44 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:37.677 10:39:44 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:37.677 10:39:44 -- common/autotest_common.sh@10 -- # set +x
00:34:39.581 INFO: APP EXITING
00:34:39.581 INFO: killing all VMs
00:34:39.581 INFO: killing vhost app
00:34:39.581 INFO: EXIT DONE
00:34:40.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:40.716 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:40.716 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:40.716 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:40.716 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:41.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:41.544 Cleaning
00:34:41.544 Removing: /var/run/dpdk/spdk0/config
00:34:41.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:41.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:41.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:41.544 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:41.544 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:41.544 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:41.544 Removing: /var/run/dpdk/spdk0
00:34:41.544 Removing: /var/run/dpdk/spdk_pid57521
00:34:41.544 Removing: /var/run/dpdk/spdk_pid57762
00:34:41.544 Removing: /var/run/dpdk/spdk_pid57996
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58106
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58162
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58290
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58308
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58518
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58632
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58742
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58864
00:34:41.544 Removing: /var/run/dpdk/spdk_pid58972
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59012
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59054
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59124
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59236
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59685
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59766
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59840
00:34:41.544 Removing: /var/run/dpdk/spdk_pid59856
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60018
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60034
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60190
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60206
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60275
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60302
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60366
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60384
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60590
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60621
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60710
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60899
00:34:41.803 Removing: /var/run/dpdk/spdk_pid60999
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61041
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61501
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61599
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61714
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61767
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61793
00:34:41.803 Removing: /var/run/dpdk/spdk_pid61877
00:34:41.804 Removing: /var/run/dpdk/spdk_pid62526
00:34:41.804 Removing: /var/run/dpdk/spdk_pid62568
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63062
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63166
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63276
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63335
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63355
00:34:41.804 Removing: /var/run/dpdk/spdk_pid63386
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65278
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65426
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65430
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65448
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65492
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65496
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65508
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65558
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65562
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65574
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65620
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65629
00:34:41.804 Removing: /var/run/dpdk/spdk_pid65641
00:34:41.804 Removing: /var/run/dpdk/spdk_pid67061
00:34:41.804 Removing: /var/run/dpdk/spdk_pid67170
00:34:41.804 Removing: /var/run/dpdk/spdk_pid68609
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70354
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70434
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70509
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70624
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70716
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70830
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70906
00:34:41.804 Removing: /var/run/dpdk/spdk_pid70991
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71102
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71194
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71295
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71375
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71460
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71571
00:34:41.804 Removing: /var/run/dpdk/spdk_pid71663
00:34:42.063 Removing: /var/run/dpdk/spdk_pid71765
00:34:42.063 Removing: /var/run/dpdk/spdk_pid71845
00:34:42.063 Removing: /var/run/dpdk/spdk_pid71926
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72034
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72132
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72229
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72314
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72388
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72469
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72549
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72656
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72751
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72857
00:34:42.063 Removing: /var/run/dpdk/spdk_pid72935
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73015
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73093
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73168
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73277
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73374
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73528
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73819
00:34:42.063 Removing: /var/run/dpdk/spdk_pid73861
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74317
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74504
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74609
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74720
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74778
00:34:42.063 Removing: /var/run/dpdk/spdk_pid74805
00:34:42.063 Removing: /var/run/dpdk/spdk_pid75121
00:34:42.063 Removing: /var/run/dpdk/spdk_pid75192
00:34:42.063 Removing: /var/run/dpdk/spdk_pid75284
00:34:42.063 Removing: /var/run/dpdk/spdk_pid75715
00:34:42.063 Removing: /var/run/dpdk/spdk_pid75862
00:34:42.063 Removing: /var/run/dpdk/spdk_pid76669
00:34:42.063 Removing: /var/run/dpdk/spdk_pid76818
00:34:42.063 Removing: /var/run/dpdk/spdk_pid77021
00:34:42.063 Removing: /var/run/dpdk/spdk_pid77129
00:34:42.063 Removing: /var/run/dpdk/spdk_pid77460
00:34:42.063 Removing: /var/run/dpdk/spdk_pid77721
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78081
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78311
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78441
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78509
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78642
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78675
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78739
00:34:42.063 Removing: /var/run/dpdk/spdk_pid78949
00:34:42.063 Removing: /var/run/dpdk/spdk_pid79189
00:34:42.063 Removing: /var/run/dpdk/spdk_pid79592
00:34:42.063 Removing: /var/run/dpdk/spdk_pid80011
00:34:42.063 Removing: /var/run/dpdk/spdk_pid80446
00:34:42.063 Removing: /var/run/dpdk/spdk_pid80966
00:34:42.063 Removing: /var/run/dpdk/spdk_pid81109
00:34:42.063 Removing: /var/run/dpdk/spdk_pid81206
00:34:42.063 Removing: /var/run/dpdk/spdk_pid81822
00:34:42.063 Removing: /var/run/dpdk/spdk_pid81891
00:34:42.063 Removing: /var/run/dpdk/spdk_pid82373
00:34:42.335 Removing: /var/run/dpdk/spdk_pid82744
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83244
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83367
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83425
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83490
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83547
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83617
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83814
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83900
00:34:42.335 Removing: /var/run/dpdk/spdk_pid83967
00:34:42.335 Removing: /var/run/dpdk/spdk_pid84044
00:34:42.335 Removing: /var/run/dpdk/spdk_pid84080
00:34:42.335 Removing: /var/run/dpdk/spdk_pid84157
00:34:42.335 Removing: /var/run/dpdk/spdk_pid84288
00:34:42.335 Clean
00:34:42.335 10:39:49 -- common/autotest_common.sh@1453 -- # return 0
00:34:42.335 10:39:49 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:42.335 10:39:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:42.335 10:39:49 -- common/autotest_common.sh@10 -- # set +x
00:34:42.335 10:39:49 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:42.336 10:39:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:42.336 10:39:49 -- common/autotest_common.sh@10 -- # set +x
00:34:42.336 10:39:49 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:42.336 10:39:49 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:42.336 10:39:49 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:42.336 10:39:49 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:42.336 10:39:49 -- spdk/autotest.sh@398 -- # hostname
00:34:42.336 10:39:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:42.596 geninfo: WARNING: invalid characters removed from testname!
00:35:09.141 10:40:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:10.518 10:40:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:12.518 10:40:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:15.053 10:40:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:16.956 10:40:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:18.861 10:40:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:21.399 10:40:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:21.399 10:40:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:21.399 10:40:28 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:21.399 10:40:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:21.399 10:40:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:21.399 10:40:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:21.410 + [[ -n 5244 ]]
00:35:21.410 + sudo kill 5244
00:35:21.419 [Pipeline] }
00:35:21.423 [Pipeline] // timeout
00:35:21.428 [Pipeline] }
00:35:21.441 [Pipeline] // stage
00:35:21.447 [Pipeline] }
00:35:21.461 [Pipeline] // catchError
00:35:21.470 [Pipeline] stage
00:35:21.472 [Pipeline] { (Stop VM)
00:35:21.484 [Pipeline] sh
00:35:21.799 + vagrant halt
00:35:24.334 ==> default: Halting domain...
00:35:30.994 [Pipeline] sh
00:35:31.271 + vagrant destroy -f
00:35:33.851 ==> default: Removing domain...
00:35:34.430 [Pipeline] sh
00:35:34.712 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:34.721 [Pipeline] }
00:35:34.736 [Pipeline] // stage
00:35:34.742 [Pipeline] }
00:35:34.755 [Pipeline] // dir
00:35:34.761 [Pipeline] }
00:35:34.778 [Pipeline] // wrap
00:35:34.785 [Pipeline] }
00:35:34.801 [Pipeline] // catchError
00:35:34.813 [Pipeline] stage
00:35:34.817 [Pipeline] { (Epilogue)
00:35:34.833 [Pipeline] sh
00:35:35.119 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:41.700 [Pipeline] catchError
00:35:41.702 [Pipeline] {
00:35:41.715 [Pipeline] sh
00:35:41.997 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:42.256 Artifacts sizes are good
00:35:42.272 [Pipeline] }
00:35:42.310 [Pipeline] // catchError
00:35:42.321 [Pipeline] archiveArtifacts
00:35:42.325 Archiving artifacts
00:35:42.427 [Pipeline] cleanWs
00:35:42.434 [WS-CLEANUP] Deleting project workspace...
00:35:42.434 [WS-CLEANUP] Deferred wipeout is used...
00:35:42.440 [WS-CLEANUP] done
00:35:42.441 [Pipeline] }
00:35:42.449 [Pipeline] // stage
00:35:42.453 [Pipeline] }
00:35:42.462 [Pipeline] // node
00:35:42.465 [Pipeline] End of Pipeline
00:35:42.490 Finished: SUCCESS